
Accurate compressive strength prediction using machine learning algorithms and optimization techniques


The complex interrelationships among numerous components present a formidable obstacle in developing mix designs for high-performance concrete (HPC). Machine learning (ML) algorithms have proven effective at resolving this problem; however, they are often regarded as opaque black-box models because they expose no discernible relationship between blend proportions and compressive strength. The present study proposes a semi-empirical methodology that integrates several techniques, including non-dimensionalization and optimization, to overcome this constraint. The methodology achieves a noteworthy level of accuracy when forecasting compressive strength (CS) across divergent datasets, demonstrating its broad applicability. Moreover, the explicit relationships that semi-empirical equations convey are of great value to practitioners and researchers in this field, especially with respect to prediction. Determining the CS of concrete is a critical facet of HPC design, and a thorough understanding of the intricate interplay among the many factors is required to attain an ideal blend proportion. The study's findings indicate that \(RF\) can accurately predict \(CS\), and that coupling the model with optimization algorithms significantly enhances its effectiveness. Among the three optimization algorithms considered, the COA optimizer exhibited the best performance in improving the accuracy and precision of the RF prediction model for CS. As a result, RFCO obtained the most suitable values of R2 and RMSE, at 0.998 and 0.88, respectively.


Modern engineering constructions make extensive use of concrete as their main building material. The erection of concrete structures in complex settings requires high-performance concrete (HPC), which exhibits attributes superior to normal concrete, such as high strength and durability. HPC is a heterogeneous material comprising superior-quality cement, coarse and fine aggregates, water, and admixtures. Its performance is not limited to strength alone; HPC displays superior characteristics in terms of strength, workability, and durability [1,2,3]. HPC has been extensively utilized across various domains in the construction industry, in structures as diverse as houses, bridges, and other significant components. The use of concrete admixtures can reduce the dimensions and weight of concrete structures. While this may contribute to lower usage of building materials and enhanced durability, its direct impact on prolonged usability is subject to consideration and may vary. HPC can be developed by integrating various improving agents, such as chemical admixtures, fibrous materials, and mineral admixtures, into the concrete mixture [4, 5].

Compressive strength \((CS)\) of a material is determined by a number of parameters, including intrinsic qualities like porosity and density and material attributes like composition and grade. Additionally, the rate at which the loading force is applied plays a crucial role in determining the \(CS\) of the material. The \(\mathrm{CS}\) of construction materials, notably concrete, plays a significant role in ascertaining the robustness and sustainability of the structure. The \(\mathrm{CS}\) of concrete is conventionally quantified in pounds per square inch (psi) and serves as a metric of its resistance to compressive forces [6]. The relationship between the CS of concrete and its load-bearing capacity is such that greater CS values correspond to an increased ability to sustain weight without fracturing or cracking. CS is a vital parameter in engineering and construction, as it is imperative to ensure structures' safe and steady performance. Consequently, the compressive resilience of materials must be thoroughly examined and validated before their integration into construction endeavors [7].

Accurately predicting specific strengths is paramount to the material efficiency and structural stability of civil infrastructure. Inadequate recognition of the innate durability of concrete can lead to unnecessary usage of cement, amplifying carbon dioxide emissions [8]. In light of this, a great deal of recent work has gone into creating predictive models that capture the link between the strength of concrete and its individual components. A prediction model should ideally offer meaningful explanations that support structures endowed with exceptional constructability and durability while minimizing cost [9,10,11]. As a result, many models have emerged that use physics- or chemistry-based relationships as a foundation. While conventional techniques have been pivotal in establishing robust correlations between critical parameters, such as cement dosage, aggregate fraction, and air void content, and concrete strength, analyzing the compound effects originating from these characteristics remains a complex and onerous task [12,13,14,15].

Naseri et al. [16] aimed to investigate the mixture design of sustainable concrete using six machine learning techniques, including a water cycle algorithm, soccer league competition algorithm, genetic algorithm, artificial neural network, support vector machine, and regression. The accuracy of these methods was compared based on performance indicators, and the water cycle algorithm was found to be the most precise model, with a \(2.86 \mathrm{MPa}\) mean absolute error. Six objective functions were defined and applied to integrate sustainability criteria, including compressive strength, cost, embodied CO2 emission, and energy and resource consumption.

Within the discipline of Artificial Intelligence, machine learning is the study of developing algorithms that can learn from datasets and become more proficient over time. Machine learning (ML) is a notable benefit in effectively handling extensive and sophisticated datasets. This capability allows for identifying underlying patterns and producing precise predictions with remarkable accuracy. The advent of numerous ML methods, encompassing supervised and reinforcement learning, unsupervised learning, deep learning, and various others, has been documented [17,18,19]. \(\mathrm{ML}\) has attained significant prominence and has been extensively implemented across varied sectors, encompassing healthcare, finance, manufacturing, and transportation. ML possesses the potential to be employed in the healthcare sector to analyze medical images as well as detect potential anomalies. In the finance domain, the implementation of ML has the potential to yield significant benefits in detecting fraudulent activities and reducing the risks associated with financial undertakings, as has been suggested in relevant literature [20,21,22].

Barkhordari et al. [23] compared different ensemble learner algorithms for predicting the compressive strength of fly ash concrete (FAC). Separate stacking with the random forest meta-learner achieved the most accurate predictions, with a coefficient of determination of 97.6% and the lowest mean square error and variance. The SSE-Random Forest algorithm performed well in prediction accuracy, with the largest R2 (0.976) and smallest MSE (0.0041) for the test set. The SSE-Gradient Boosting model also performed well, with an MSE of 0.005 and R2 of 0.997 for the training phase. Naseri et al. [24] tackled the challenge of pre-fabrication estimation of concrete compressive strength, advocating for efficient alternatives to labor-intensive experimental methods. Investigating the influence of materials and sample age on fly ash concrete strength, a novel predictive method was introduced, utilizing the water cycle and genetic algorithms. Comparative analysis revealed the water cycle algorithm as the most accurate model, surpassing classical regression models. Concrete mixtures with less than \(35\%\) fly ash by weight of the binder displayed maximum CS, with a notable decline beyond this threshold. These findings shed light on optimizing concrete mixture proportions for enhanced strength, bolstering sustainability and efficiency in production.

The random forest (RF) algorithm is a widely applied ML technique commonly used for regression and classification tasks. It is a form of ensemble learning in which numerous decision trees are integrated to enhance the accuracy of predictions [25]. In an RF model, each decision tree is constructed from a random subset of both the data entries and the features. The RF model operates by combining the outcomes of numerous decision trees, thereby diminishing the risk of overfitting and enhancing the overall precision of the model. Randomizing the data and features employed in individual decision trees contributes to the resilience and adaptability of the model. RF models have found application in diverse fields, ranging from finance and healthcare to environmental science [26]. They have been leveraged for tasks such as forecasting stock prices, diagnosing illnesses, and pinpointing environmental factors that may trigger disease outbreaks. The RF algorithm is a potent and versatile ML methodology that has demonstrated considerable efficacy across numerous domains [27].

The present study employs the random forest (RF) algorithm to predict the compressive strength (CS) of HPC, owing to its proficient handling of intricate systems and multifarious parameters through \(\mathrm{ML}\) methods. Optimization procedures were additionally deployed to enhance the accuracy of the \(\mathrm{HPC}\) predictions. Optimization algorithms are mathematical techniques used to locate the most favorable outcome for a particular problem, and they have been extensively applied to tune diverse parameters linked to the configuration of \(\mathrm{HPC}\) systems. The subsequent section delineates three optimization algorithms, namely the rider optimization algorithm (ROA), black widow optimization algorithm (BWOA), and COOT optimization algorithm (COA). The present study introduces an innovative methodology for predicting \(\mathrm{CS}\) by integrating \(\mathrm{RF}\) with these three optimization algorithms. The methodology holds substantial potential as a valuable instrument for engineers seeking to enhance the design of structures governed by \(\mathrm{CS}\). The paper introduces a novel hybrid approach, integrating non-dimensionalization, optimization, and ML algorithms, to enhance predictive models for HPC. It addresses the complexity of HPC mix designs by accurately forecasting compressive strength and optimizing blend proportions. Notably, it emphasizes the interpretability of ML models, which is crucial for practical engineering applications. The study advocates for a comprehensive life-cycle assessment of HPC, considering long-term durability and sustainability. Collaborative interdisciplinary efforts involving material science, civil engineering, and computer science are highlighted for advancing sustainable and efficient HPC formulations and practices.


Data assembly

Supervised machine learning \((\mathrm{ML})\) algorithms require numerous input variables to predict the compressive strength \((\mathrm{CS})\) of HPC. The data in the present study were procured from previously published literature and the test data listed in Appendix 1, Table 6 [28]. The models employed a total of eight input variables, namely water (W), binder (B), fly ash (FA), micro silica (MS), coarse aggregate (CA), superplasticizers (SP), total aggregate (TA), and age. The dependent variable in the models under analysis was CS. The models' results depend considerably on both the number of data points and the number of input parameters. The present investigation employed 168 data points (i.e., mixes) to forecast the characteristics of HPC. The RF model was implemented in Python within the Anaconda environment. The relative distribution of each parameter used in the mixes was examined, and a comprehensive descriptive statistical analysis of these parameters is reported in Tables 1 and 2 for training and testing, respectively.

Table 1 The training phase's input and output variables' statistical characteristics
Table 2 The statistical properties of inputs and output variables in the testing phase
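As a sketch of this data-assembly step, the 70/30 partition and the descriptive statistics of Tables 1 and 2 can be reproduced along the following lines. The values below are synthetic placeholders standing in for the published mixes, and the column names merely mirror the paper's eight inputs and the CS output:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 168  # number of mixes used in the study

# Hypothetical placeholder data; real values come from Appendix 1, Table 6.
df = pd.DataFrame({
    "W": rng.uniform(140, 200, n),     # water (kg/m^3)
    "B": rng.uniform(350, 550, n),     # binder
    "FA": rng.uniform(0, 150, n),      # fly ash
    "MS": rng.uniform(0, 50, n),       # micro silica
    "CA": rng.uniform(800, 1100, n),   # coarse aggregate
    "SP": rng.uniform(0, 15, n),       # superplasticizer
    "TA": rng.uniform(1600, 1900, n),  # total aggregate
    "Age": rng.choice([3, 7, 28, 90], n),
    "CS": rng.uniform(30, 100, n),     # compressive strength (MPa)
})

# 70/30 train/test split, as in the study
train = df.sample(frac=0.7, random_state=42)
test = df.drop(train.index)

# Descriptive statistics analogous to Tables 1 and 2
stats_train = train.describe().T[["mean", "std", "min", "max"]]
print(stats_train)
```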

Table 3 presents the correlation matrix showing the relationships between the input parameters (B, FA/B, MS/B, CA/B, CA/TA, W/B, SP/B, Age) and the output (CS). Correlation values range from − 1 to 1, where − 1 indicates a perfect negative correlation, 0 indicates no correlation, and 1 indicates a perfect positive correlation.

Table 3 Correlation between the inputs and output
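A correlation matrix of the kind shown in Table 3 can be computed directly with pandas. The data here are synthetic, so the coefficients only illustrate the mechanics, not the study's actual values:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 168

# Placeholder mix data with a few of the paper's input names.
df = pd.DataFrame(rng.uniform(0, 1, size=(n, 4)),
                  columns=["B", "W/B", "Age", "CS"])
# Make CS loosely depend on the inputs so the matrix is not pure noise.
df["CS"] = (0.6 * df["B"] - 0.4 * df["W/B"] + 0.3 * df["Age"]
            + 0.1 * rng.standard_normal(n))

corr = df.corr()  # Pearson correlations in [-1, 1], as in Table 3
print(corr["CS"].round(2))
```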

Random forest

Principle of RF

A random forest is a collection of tree-structured classifiers \(\left\{b\left(x,{\aleph }_{l}\right),\; l=1,\dots \right\}\), where each tree casts a unit vote for the preferred class of a given input \(x\). Here, the \(\{{\aleph }_{l}\}\) are independent random vectors with identical distributions.

In Breiman's model [29], a random forest is made up of several tree-structured classifiers, each created from a training sample set and a random vector \({\aleph }_{l}\) for the \(l\)-th tree. Because these random vectors are independent and identically distributed across trees, each produces a classifier \(b\left(x,{\aleph }_{l}\right)\), where \(x\) is the input vector. Iterating the procedure \(l\) times produces a sequence of classifiers \(\left\{{b}_{1}\left(x\right), {b}_{2}\left(x\right), \dots, {b}_{l}\left(x\right)\right\}\), which can be used to generate several classification models. The decision function is computed by the usual majority vote, which determines the system's final output.


The amalgamation of several distinct decision tree replicas is represented by \(B(x)\), with every tree casting a vote for the best classification outcome for the given input parameters. The indicator function is represented by \(F(\cdot)\), and the output variable is \(V\) [30]. The procedure for selecting the optimal classification result is illustrated in Fig. 1.

Fig. 1
figure 1

Schematic of random forest

RF’s characters

The margin function [31], which is used in \(RF\) to measure how much the average number of votes at \(X,V\) for the correct class exceeds the average for any wrong class, is defined as follows:

$$mc\left(X,V\right)={av}_{l}F\left({b}_{l}\left(X\right)=V\right)-{max}_{j\ne V}{av}_{l}F({b}_{l}\left(X\right)=j)$$
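The margin function above can be evaluated numerically. This is a toy sketch with hypothetical tree votes, not part of the study's implementation:

```python
import numpy as np

def margin(votes, true_class, classes):
    """Margin function mc(X, V): the average vote share for the correct
    class minus the largest average vote share among the wrong classes."""
    votes = np.asarray(votes)
    shares = {c: np.mean(votes == c) for c in classes}
    wrong = max(v for c, v in shares.items() if c != true_class)
    return shares[true_class] - wrong

# Toy example: 10 hypothetical trees vote on one input X whose true class is 1.
tree_votes = [1, 1, 1, 0, 1, 2, 1, 1, 0, 1]
# Vote shares: 0.7 for class 1, 0.2 for class 0, 0.1 for class 2.
print(margin(tree_votes, true_class=1, classes=[0, 1, 2]))
```

A positive margin means the forest classifies the input correctly with some confidence; a negative margin means the wrong class wins the vote.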

A larger value of the margin function indicates a higher degree of accuracy and confidence in the classification forecast. Subject to the constraints given by \(V\) and \(l\), the function \(mc(X,V)\) defined in Eq. (2) averages the values of the indicator \(F\) applied to \({b}_{l}(X)\) and compares these averages through a maximum operation. This classifier's generalization error is defined as follows:


Leo Breiman proved that the randomness of \({b}_{l}\left(X\right)=b\left(x,{\aleph }_{l}\right)\) obeys the strong law of large numbers when the number of decision trees is sufficiently large. For almost all sequences \({\aleph }_{1},\dots\), the generalization error \(O{S}^{*}\) converges to a limiting value as the number of decision trees grows. Breiman furthermore showed that \(RF\) does not exhibit vulnerability to overfitting and provided the limiting value of the generalization error:

$$O{S}^{*}={O}_{x,V}\left({O}_{\theta }\left(b\left(x,\theta \right)=V\right)-{max}_{j\ne V}{O}_{\theta }\left(b\left(x,\theta \right)=j\right)<0\right)$$

Leo Breiman also deduced that the generalization error has an upper bound:

$${OS}^{*}\le \overline{\beta }(1-{z}^{2})/{z}^{2}$$

The generalization error of random forest (RF) is governed by a pair of quantities: the strength of the individual trees within the forest, denoted \(z\), and the mean correlation \(\overline{\beta }\) between the trees. A lower correlation indicates diminished mutual reliance among the trees, leading to enhanced performance of the RF [32].
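As a concrete illustration of fitting an RF regressor of this kind, the sketch below uses scikit-learn with synthetic inputs standing in for the study's eight mix-design variables; the hyperparameters are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 168  # same number of mixes as the study

# Synthetic stand-in for the eight inputs and the CS output (MPa).
X = rng.uniform(0, 1, size=(n, 8))
y = 40 + 30 * X[:, 0] - 20 * X[:, 1] + 10 * X[:, 7] + rng.normal(0, 1, n)

# 70/30 split, matching the study's protocol.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)

rf = RandomForestRegressor(n_estimators=200, random_state=42)
rf.fit(X_tr, y_tr)
print("test R2:", round(r2_score(y_te, rf.predict(X_te)), 3))
```

In the study, the RF hyperparameters are what the three optimizers (ROA, BWOA, COA) tune; here they are simply fixed.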

Rider optimization algorithm (ROA)

The ROA algorithm is typically formulated around a group of riders collaborating to reach a target position [33]. The position of the \({z}^{th}\) rider at time \(ti\) is represented by \({V}^{ti}(z,s)\). Furthermore, the rider team is composed of bypassers (Bi), followers (Fi), overtakers (Oi), and attackers \({(A}_{i})\) [34].

$${V}^{ti}=\left\{{V}^{ti}\left(z,s\right)\right\};\quad 1\le z\le D;\; 1\le s\le C.$$

For the \({z}^{th}\) rider, the angles related to the location, steering, and vehicle coordinates are represented by \({\theta }_{z}\), \({({V}_{i})}_{(z,s)}^{ti+1}\), and \(\varphi\). Each rider also carries the significant vehicle characteristics: accelerator (\({ai}_{z}\)), brake \(({br}_{z})\), and gear (\({Ei}_{z}\)). The gear value ranges from 0 to 4, while the brake and accelerator range from 0 to 1.

Suppose the bypass rider takes a conventional route instead of following the leader. In that scenario, the location update for this group is chosen at random using Eq. (7), where \(\lambda\) represents a random value between 0 and 1, \(\varphi\) denotes a random number between 1 and \(D\), \(\rho\) denotes a value between 1 and \(D\), and \(\eta\) is a random vector of size \(C\times 1\) with entries between 0 and 1.

$${X}_{ti+1}^{Ci}(z,s)=\lambda \left[{V}^{ti}(\varphi ,s)\times \eta (s)+{V}^{ti}(\rho ,s)\times \left[1-\eta (s)\right]\right].$$

Thus, to reach the objective, the positions of the bypass riders are updated, and the follower modifies its placement according to the location of the leading rider, using the coordinate selection given in Eq. (8). In this equation, \({X}^{Ri}\) represents the location of the leader, \(Ri\) the leader's index, \({Di}_{z,s}^{ti+1}\) the steering angle of the \({z}^{th}\) rider in the \({q}^{th}\) coordinate, and \({g}_{z}^{ti}\) the distance the \({z}^{th}\) rider must cover, computed by multiplying the off-time rate by the rider's velocity.


The overtaking riders change their position based on the three factors listed in Eq. (9): the direction indicator, the relative success rate, and the coordinate selector. In this equation, \({X}_{ti}\left(z,q\right)\) represents the location of the \({z}^{th}\) rider in the \({q}^{th}\) coordinate, while \({Ci}_{ti}^{It}(z)\) denotes the direction indicator of the rider's movement.


The generalized distance vector, calculated by subtracting the \({z}^{th}\) rider's position from that of the leader, determines the coordinate selection. Similarly, the attacker rider uses the same updating mechanism as the follower in an attempt to take the lead [35]. But unlike the follower, the attacker changes all coordinates instead of only a subset of them, as shown by Eq. (10):


According to Eq. (11), the activity counter takes the value 1 when the rider's success rate exceeds its previous value and 0 otherwise.

$${Ai}_{n}^{ti+1}\left(z\right)=\left\{\begin{array}{ll}1; & \text{if}\; {p}_{ti+1}(z)>{p}_{ti}(z)\\ 0; & \text{otherwise}\end{array}\right.$$

The steering angle is updated by the activity counter, as shown in Eq. (12).

$${Hi}_{z,s}^{ti+1}=\left\{\begin{array}{ll}{Hi}_{z+1,s}^{ti} & \text{if}\; {Ai}_{n}^{ti+1}\left(z\right)=1\\ {Hi}_{z-1,s}^{ti} & \text{if}\; {Ai}_{n}^{ti+1}\left(z\right)=0\end{array}\right.$$

As stated in Eq. (13), the gear update selects the higher or lower value depending on the activity counter.

$${Ei}_{z}^{ti+1}=\left\{\begin{array}{ll}{Ei}_{z}^{ti}+1 & \text{if}\; {Ai}_{n}^{ti+1}\left(z\right)=1,\; {Ei}_{z}^{ti}\ne \left|Ei\right|\\ {Ei}_{z}^{ti}-1 & \text{if}\; {Ai}_{n}^{ti+1}\left(z\right)=0,\; {Ei}_{z}^{ti}\ne 0\\ {Ei}_{z}^{ti} & \text{otherwise}\end{array}\right.$$
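The activity-counter and gear updates of Eqs. (11) and (13) amount to simple bookkeeping. The function below is an illustrative reading of those equations, not the authors' implementation:

```python
def update_rider_state(success_prev, success_now, gear, max_gear):
    """Sketch of Eqs. (11) and (13): the activity counter is 1 when the
    rider's success rate improved, and the gear is shifted up or down
    accordingly, clamped to the range [0, max_gear]."""
    activity = 1 if success_now > success_prev else 0  # Eq. (11)
    if activity == 1 and gear != max_gear:             # Eq. (13)
        gear += 1
    elif activity == 0 and gear != 0:
        gear -= 1
    return activity, gear

print(update_rider_state(0.4, 0.6, gear=2, max_gear=4))  # improved -> (1, 3)
print(update_rider_state(0.6, 0.5, gear=0, max_gear=4))  # worse, clamped -> (0, 0)
```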

Black widow optimization algorithm (BWOA)

The BWOA is a meta-heuristic algorithm that integrates evolutionary algorithms with distinct criteria based on the reproductive behavior exhibited by black widow spiders [36]. The BWOA algorithm emulates the procreation behavior of Latrodectus mactans, commonly known as black widow spiders, which entails a multifaceted mechanism of assortment and propagation aimed at generating novel progeny. The BWOA algorithm presents a distinctive and efficacious methodology for addressing intricate optimization problems, rendering it capable of circumventing local optima and converging promptly towards optimal solutions, thanks to its aptitude for upholding equilibrium between its exploration and exploitation phases. Such a combination of attributes contributes to its remarkable effectiveness [37, 38]. Furthermore, Fig. 2 displays the BWO flowchart.

Fig. 2
figure 2

Flowchart of the Black Widow Optimization algorithm

The primary phases of BWO may be summed up as follows in brief:

  • 1: Initialization

In this stage, the population consists of \(M\) widows, each represented as an array of size \(1\times {M}_{var}\) encoding a candidate solution to the problem. This array can be defined as \(\mathrm{widow}=({x}_{1},{x}_{2},\dots , {x}_{{M}_{var}})\), where \({M}_{var}\) is the dimension of the optimization problem, i.e., the number of decision values the program must obtain, and \({x}_{i}\) is the \(i\)-th component of the candidate solution.

The fitness of a widow is obtained by evaluating the fitness function \(f\) on the array \(({x}_{1},{x}_{2},\dots , {x}_{{M}_{var}})\): \(\mathrm{fitness}=f(\mathrm{widow})=f({x}_{1},{x}_{2},\dots , {x}_{{M}_{var}})\). Subsequently, the procreation process randomly selects pairs of parents that engage in mating, during which the female black widow consumes the male, either during or after copulation.

  • 2: Procreate

In the procreation step, an array \(\beta\) of random numbers with the same length as a widow array is created. Offspring are then produced using Eq. (14), in which \({x}_{1}\) and \({x}_{2}\) are the parents and \({y}_{1}\) and \({y}_{2}\) the offspring. The crossover result is evaluated and stored.

$${y}_{1}=\beta \times {x}_{1}+\left(1-\beta \right)\times {x}_{2}\quad \text{and}\quad {y}_{2}=\beta \times {x}_{2}+\left(1-\beta \right)\times {x}_{1}$$
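Eq. (14) amounts to a pair of complementary convex combinations of the two parents. A minimal sketch, with illustrative parent vectors:

```python
import numpy as np

def procreate(x1, x2, beta):
    """BWOA crossover of Eq. (14): two offspring built as complementary
    convex combinations of the parent widows x1 and x2."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    y1 = beta * x1 + (1 - beta) * x2
    y2 = beta * x2 + (1 - beta) * x1
    return y1, y2

# Illustrative parents; beta would normally be drawn at random in [0, 1].
y1, y2 = procreate([0.0, 2.0], [4.0, 6.0], beta=0.25)
print(y1, y2)  # [3. 5.] [1. 3.]
```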

COOT optimization algorithm (COA)

The COOT optimization algorithm is predicated on the distinct movement patterns exhibited by coot populations on water surfaces. Coots are diminutive avian species that exhibit collective behaviors on aquatic surfaces, primarily aimed at approaching food sources or predetermined locations [39]. The algorithm's procedural set of instructions is stipulated as follows:

The population shall be initialized through a randomized process following Eq. (15):

$$CP\left(i\right)=rand \left(1,b\right)\times \left(vc-kc\right)+kc$$

\(CP\left(i\right)\) represents the position of the \(i\)-th coot, while \(b\) refers to the number of variables, or dimensions, in the optimization problem. The search space is defined by the upper bound \(vc\) and lower bound \(kc\), which determine the maximum and minimum values for each variable in the problem space.

$$vc=\left[{vc}_{1},{vc}_{2},\dots ,{vc}_{b}\right], kc=\left[{kc}_{1},{kc}_{2},\dots ,{kc}_{b}\right]$$
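The initialization of Eqs. (15) and (16) can be sketched as follows; the bounds and population size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_coots(n_coots, b, kc, vc):
    """Random population initialization of Eq. (15):
    CP(i) = rand(1, b) * (vc - kc) + kc for each coot."""
    kc, vc = np.asarray(kc, float), np.asarray(vc, float)
    return rng.uniform(0.0, 1.0, size=(n_coots, b)) * (vc - kc) + kc

# Illustrative per-dimension bounds, as in Eq. (16).
kc = np.array([-5.0, 0.0, 10.0])  # lower bounds
vc = np.array([5.0, 1.0, 20.0])   # upper bounds
pop = init_coots(8, 3, kc, vc)
print(pop.shape)  # (8, 3)
```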

Once the population is initialized, the position of each coot undergoes updates based on four distinct movement behaviors.

Random movement

Equation (17) is used to randomly initialize a position \(G\) for the first step of this movement.

$$G=rand \,\left(1,b\right)\times \left(vc-kc\right)+kc$$

To prevent being stuck in a local optimum, the position is modified using Eq. (18):

$$CP\left(i\right)=CP\left(i\right)+E\times {S}_{2}\times (G-CP\left(i\right))$$

Eq. (19) is used to determine the value of E, which is then utilized in Eq. (18) along with a random number \({S}_{2}\) in the range of [0, 1].

$$E=1-Z\times (\frac{1}{Iter})$$

The variable \(Iter\) represents the maximum number of iterations, while \(Z\) denotes the current iteration.
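Eqs. (17) through (19) can be combined into a single random-movement update. The sketch below is illustrative; the bounds and iteration numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_movement(cp, kc, vc, z, max_iter):
    """Random-movement update of Eqs. (17)-(19): draw a random point G in
    the search space, compute the decaying factor E, and move the coot
    toward G by a random fraction S2."""
    G = rng.uniform(0.0, 1.0, size=cp.shape) * (vc - kc) + kc  # Eq. (17)
    E = 1 - z * (1 / max_iter)                                 # Eq. (19)
    S2 = rng.uniform(0.0, 1.0)
    return cp + E * S2 * (G - cp)                              # Eq. (18)

# Illustrative coot at the origin of a [-1, 1]^3 search space.
cp = np.zeros(3)
kc, vc = np.full(3, -1.0), np.full(3, 1.0)
new_cp = random_movement(cp, kc, vc, z=10, max_iter=100)
print(new_cp)
```

Because E shrinks as the iteration count z grows, the random jumps become smaller over time, shifting the search from exploration toward exploitation.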

Chain movement

To execute the chain movement, the average position of two coots can be determined by utilizing Eq. (20):


where \(CP\left(i-1\right)\) is the location of the second coot bird.

Adjusting position according to the leader

During the leadership movement, a coot bird updates its position based on the position of the leader within its group. Specifically, a coot bird follower moves towards the leader in its group. The leader is chosen using Eq. (21):

$$P=1+(i\, MOD \,MZ)$$

Eq. (21) utilizes P to denote the leader’s number, \(i\) for the follower's number, and \(MZ\) for the total number of leaders [40].

During the switch movement, the position of a coot bird is updated by utilizing Eq. (22):

$$CP\left(i\right)=LP\left(P\right)+2\times {S}_{1}\times \mathrm{cos}\left(2s\pi \right)\times (LP\left(P\right)-CP\left(i\right))$$

Eq. (22) employs \(CP\left(i\right)\) to represent the current position of the coot bird, \(LP\left(P\right)\) for the position of the chosen leader, \({S}_{1}\) for a random number in the range of [0, 1], and \(s\) for a random number in the interval of [− 1, 1].

Leader movement

The leader must move from the current local optimum toward the global optimum to locate the best position [41]. This is accomplished by updating the leader's position using Eq. (23):

$$LP\left(i\right)=\left\{\begin{array}{ll}B\times {S}_{3}\times \mathrm{cos}\left(2\pi S\right)\times \left(qBest-LP\left(i\right)\right)+qBest & {S}_{4}<0.5\\ B\times {S}_{3}\times \mathrm{cos}\left(2\pi S\right)\times \left(qBest-LP\left(i\right)\right)-qBest & {S}_{4}\ge 0.5 \end{array}\right.$$

Eq. (23) utilizes \(q\mathrm{Best}\) to denote the best possible position, \({S}_{3}\) and \({S}_{4}\) as random numbers in the range of [0, 1], and S as a random number in the interval of [-1, 1]. Eq. (24) is utilized to determine the value of B.

$$B=2-Z\times (\frac{1}{Iter})$$
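A sketch of the leader update of Eqs. (23) and (24) follows; the positions are illustrative, and the random draws stand in for \(S_3\), \(S_4\), and \(S\):

```python
import numpy as np

rng = np.random.default_rng(0)

def leader_movement(lp, q_best, z, max_iter):
    """Leader update of Eqs. (23)-(24): the leader moves around the best
    position qBest found so far, with the factor B decaying over the
    iterations."""
    B = 2 - z * (1 / max_iter)                     # Eq. (24)
    S3, S4 = rng.uniform(0, 1), rng.uniform(0, 1)
    S = rng.uniform(-1, 1)
    step = B * S3 * np.cos(2 * np.pi * S) * (q_best - lp)
    # Eq. (23): add or subtract qBest depending on the random switch S4.
    return step + q_best if S4 < 0.5 else step - q_best

# Illustrative 2-D leader position and best-so-far position.
lp = np.array([1.0, 2.0])
q_best = np.array([0.0, 0.0])
print(leader_movement(lp, q_best, z=5, max_iter=50))
```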

Performance evaluation methods

As previously stated, this study employs a number of measures, including the coefficient of persistence \((\mathrm{CP})\), mean square error \((\mathrm{MSE})\), mean absolute relative error \((\mathrm{MARE})\), coefficient of determination (R2), and root mean square error \((\mathrm{RMSE})\), to assess the models. These metrics are computed using Eqs. (25)-(29):

$${R}^{2}={\left(\frac{{\sum }_{i=1}^{u}\left({z}_{i}-\overline{z }\right)\left({e}_{i}-\overline{e }\right)}{\sqrt{\left[{\sum }_{i=1}^{u}{\left({z}_{i}-\overline{z }\right)}^{2}\right]\left[{\sum }_{i=1}^{u}{\left({e}_{i}-\overline{e }\right)}^{2}\right]}}\right)}^{2}$$
$$\mathrm{RMSE}=\sqrt{\frac{1}{U}{\sum }_{i=1}^{u}{\left({e}_{i}-{z}_{i}\right)}^{2}}$$
$$\mathrm{MARE}=\frac{1}{U}\sum_{i=1}^{u}\frac{\left|{z}_{i}-{e}_{i}\right|}{\left|\overline{z }-\overline{e }\right|}$$

Here, \({e}_{i}\) and \({z}_{i}\) denote the experimental and predicted values, respectively. The mean values of the experimental and predicted data points are represented by \(\overline{e }\) and \(\overline{z }\). Finally, \(U\) indicates the number of samples taken into account.
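The metrics above can be computed with NumPy. Note that MARE is implemented here in its conventional per-sample form, |z_i - e_i| / |e_i|, which is an assumption about the intended denominator; the CS values are illustrative:

```python
import numpy as np

def evaluate(e, z):
    """R^2, RMSE, and MARE for experimental values e and predictions z,
    following the paper's notation. MARE uses the conventional per-sample
    denominator |e_i| (an assumption about the intended formula)."""
    e, z = np.asarray(e, float), np.asarray(z, float)
    r2 = np.corrcoef(e, z)[0, 1] ** 2          # coefficient of determination
    rmse = np.sqrt(np.mean((e - z) ** 2))      # root mean square error
    mare = np.mean(np.abs(z - e) / np.abs(e))  # mean absolute relative error
    return r2, rmse, mare

# Illustrative experimental vs. predicted CS values (MPa).
e = np.array([50.0, 60.0, 70.0, 80.0])
z = np.array([51.0, 59.0, 71.5, 78.0])
print(evaluate(e, z))
```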

Discussion and results

This section assesses the newly introduced hybrid models. Performance is reported for two phases, training and testing; \(70\%\) of the instances in the dataset are used for training, with the remaining \(30\%\) used for testing. A greater value is desirable for the R2 measure, while for the error measures the goal is to minimize the value. A small shift in the performance measures between the training and testing stages indicates how well the model was trained. Table 4 presents an evaluation of the models' performance. RFCOtrain had the greatest R2 value, 0.9981, while RFROtest had the lowest, 0.9778. RFCOtest yielded the most suitable \(\mathrm{RMSE}\) and \(\mathrm{CP}\) values, \(0.8766\) and \(0.327\), respectively. RFCOtest also achieved the best (lowest) \(\mathrm{MARE}\), \(0.0096\), while RFROtrain recorded the highest, \(0.0342\), in line with the other two error assessors. With regard to \(\mathrm{MSE}\), RFROtest recorded the highest value of this criterion, \(6.829\), while RFCOtest obtained the lowest, \(0.7685\).

Table 4 The results obtained from the combined models

Table 5 provides a comparative analysis between the current study and previously published articles concerning compressive strength prediction. The table presents the models used in each study and the corresponding performance metrics, including R2 and RMSE. The models used in the present study (RFCO) achieved a high R2 of 0.9981 and a low RMSE of 0.880 compared to those used in the referenced published papers. This demonstrates the effectiveness and accuracy of the RFCO model in predicting CS.

Table 5 The comparison between the present work and published articles

A scatter plot comparing the predicted and actual results for the three hybrid models RFCO, RFBW, and RFRO is shown in Fig. 3. To depict the separate training and testing stages, the current methodology uses two linear fits, a scatter plot, and a centerline. The scatter plot shows a pronounced positive correlation between the actual and predicted values for each of the three models, indicating that the models are highly accurate in predicting the values. Nonetheless, the scatter plot reveals that RFCO exhibits the highest degree of data point clustering around the linear fit lines, implying superior accuracy among the three models. The correlation between RFBW and RFRO is strong, although their data points exhibit greater dispersion. Both models' linear regression lines show a similar slope and intercept, suggesting that they have similar predictive abilities.

Fig. 3
figure 3

Scatter plots generated for the hybrid models

Figure 4 depicts a column plot that presents a comparative analysis between three hybrid models’ predicted and measured samples. The plot exhibits the degree to which the anticipated values conform with the observed values, effectively spotlighting the efficacy of the models. The results demonstrate that RFCO achieves a notable degree of precision, as evidenced by the close correspondence between predicted and measured values across the entirety of the dataset. The findings suggest a robust association between the projected and observed outcomes in both RFBW and RFRO, albeit with a marginally higher degree of discrepancies from the empirical data. This observation indicates that although RFBW and RFRO exhibit efficacy, they may lack the accuracy offered by RFCO.

Fig. 4
figure 4

A comparison between the measured and predicted samples

The violin-scatter plot in Fig. 5 illustrates the percentage errors of the presented models. During the training phase, RFCO exhibited a mean error of 0%, a distinct normal distribution, and minimal dispersion. The error distribution was favorable, with all values remaining below the 10% threshold. RFBW showed dispersion in both phases but a more symmetrical and uniform normal distribution; its error percentage likewise did not exceed 10%. RFRO exhibited the largest and most varied discrepancies; a single outlier, observed only during the testing stage, exceeded the 10% threshold. The error distribution of RFBW was more dispersed than those of the other two models, with a lower frequency of values near zero. Overall, all three models performed satisfactorily, but RFCO demonstrated the best results.

Fig. 5
figure 5

Violin-scatter plot of the error distributions of the hybrid models
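The percentage errors summarized by such plots are straightforward to compute; the sketch below (hypothetical values, not the study's samples) flags points outside the 10% band discussed above:

```python
import numpy as np

def percentage_errors(measured, predicted):
    """Signed percentage error per sample, the quantity the
    violin-scatter plot distributes."""
    return 100.0 * (predicted - measured) / measured

# Hypothetical samples; one prediction deliberately misses by >10%.
measured = np.array([40.0, 50.0, 60.0, 70.0])
predicted = np.array([41.0, 49.0, 66.5, 70.7])

errors = percentage_errors(measured, predicted)
outliers = int(np.sum(np.abs(errors) > 10.0))
print(np.round(errors, 2), outliers)
```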

Figure 6 shows the analysis using the Taylor diagram. The Taylor diagram comprehensively compares multiple models based on correlation, standard deviation, and RMSE. RFCO demonstrated the highest performance among the models assessed, followed by RFBW and RFRO. The superior performance of RFCO, as indicated by its placement in the Taylor diagram, suggests that it achieved a remarkable balance between correlation, standard deviation, and RMSE in compressive strength prediction. The RFBW model also showcased commendable performance, securing a close second in overall performance. RFRO, although slightly below RFBW, displayed a notable level of accuracy and reliability in predicting compressive strength.

Fig. 6
figure 6

Taylor diagram for the presented models
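A Taylor diagram places each model by three linked statistics — correlation with the observations, standard deviation, and centred RMS difference — which satisfy a law-of-cosines identity that lets one polar plot show all three. A small sketch with hypothetical numbers:

```python
import numpy as np

def taylor_stats(obs, model):
    """The three statistics a Taylor diagram encodes for one model."""
    r = np.corrcoef(obs, model)[0, 1]
    sd_obs, sd_mod = np.std(obs), np.std(model)
    # Centred RMS difference: means are removed before differencing.
    crmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
    return r, sd_obs, sd_mod, crmsd

# Hypothetical observations and model output, not the study's data.
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
model = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

r, sd_obs, sd_mod, crmsd = taylor_stats(obs, model)
# Law-of-cosines identity: crmsd^2 = sd_obs^2 + sd_mod^2 - 2*sd_obs*sd_mod*r
identity = sd_obs**2 + sd_mod**2 - 2 * sd_obs * sd_mod * r
print(round(r, 4), round(crmsd, 4), bool(np.isclose(identity, crmsd**2)))
```

A model near the observation point on the diagram, such as RFCO here, simultaneously has high correlation, a matching standard deviation, and a small centred RMS difference.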

The insights gained from the hybrid models, integrating ML algorithms with techniques like non-dimensionalization and optimization, can be applied in several practical engineering applications within the field of HPC formulation:

  1. Optimal mix design: the hybrid models can guide engineers in selecting optimal mix designs for HPC, considering various components and their interrelationships. This can lead to formulations with improved CS and other desired properties.

  2. Resource optimization: by accurately predicting CS, engineers can optimize the use of raw materials, minimizing waste and reducing costs while maintaining the desired performance of the concrete.

  3. Structural design and durability assessment: CS predictions are crucial in structural design. Hybrid models can aid in assessing the durability and performance of HPC in specific structural applications, allowing for better design choices and enhancing the lifespan of structures.

  4. Quality control and assurance: predictive models can be utilized for quality control during the production of HPC, ensuring that the concrete meets the desired strength requirements before it is used in construction projects.

  5. Real-time monitoring and decision-making: ML algorithms can be adapted to continuously monitor and predict concrete strength during curing or after construction. This real-time feedback can help adjust construction schedules or make necessary modifications to ensure structural integrity.

In addition, potential limitations and areas for further research:

  1. Data availability and quality: the availability of comprehensive and high-quality data is critical for the accuracy and effectiveness of predictive models. Further research should focus on improving data collection and standardization within the concrete industry.

  2. Model interpretability: addressing ML models’ ‘black-box’ nature is essential for broader adoption. Research should aim to enhance the interpretability of these models, making the predictions more understandable to engineers and stakeholders.

  3. Incorporating additional parameters: extending the models to consider more parameters, such as environmental conditions, curing processes, and construction practices, can enhance the accuracy and applicability of the predictions.

  4. Generalization and transferability: research should focus on enhancing the generalization of models across diverse geographical and climatic regions, considering different raw materials and mix design practices.

  5. Robustness to variability: investigating the robustness of models to variations in raw material properties and other external factors would help ensure that predictions remain accurate and reliable under different conditions.


High-performance concrete (HPC) is well known for its remarkable strength, durability, and workability. In construction engineering, concrete’s compressive strength (CS) is widely acknowledged as a crucial mechanical attribute, and machine learning (ML) offers a practical approach to predicting it. The aim of this work was to forecast the CS of HPC using the random forest (RF) ML technique. To increase the accuracy of the results, the study combined the RF model with optimization methods, namely COA, ROA, and BWOA. Model performance was assessed by means of the R2, RMSE, CP, MSE, and MARE indices. The findings show that, in comparison to the RFRO and RFBW models, the RFCO model performs better, with smaller error indices. The RFCO model yielded the best RMSE values in both the training and testing stages. The restricted distribution range displayed by these models suggests a precise and dependable capacity to forecast the CS of HPC. All models, however, showed a consistent percentage of errors, indicating that further improvements are possible. The research indicates that RF hybrid models, particularly RFCO, are highly effective at forecasting the CS of HPC, providing accurate and consistent results for a range of engineering uses. Future research can enhance predictive models for HPC by incorporating diverse factors such as environmental conditions, curing techniques, and sustainable materials. Integration of real-time sensor data and advanced imaging can enrich model insights. Addressing the interpretability of ML models for HPC is crucial, and a comprehensive life-cycle assessment of HPC, considering durability and sustainability beyond early strength, is essential.
Collaborative interdisciplinary efforts involving material science, civil engineering, and computer science are key to advancing sustainable, resilient, and efficient construction practices.
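The error indices named in the conclusions have standard definitions; a minimal sketch (hypothetical values, and omitting CP, whose definition varies between studies) is:

```python
import numpy as np

def error_indices(y_true, y_pred):
    """R2, RMSE, MSE, and MARE as commonly defined; CP is omitted
    because its definition varies between studies."""
    err = y_pred - y_true
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot       # coefficient of determination
    mare = float(np.mean(np.abs(err) / np.abs(y_true)))
    return {"R2": r2, "RMSE": rmse, "MSE": mse, "MARE": mare}

# Hypothetical CS values (MPa), not the study's data.
y_true = np.array([50.0, 60.0, 70.0, 80.0])
y_pred = np.array([51.0, 59.0, 70.0, 81.0])

scores = error_indices(y_true, y_pred)
print({k: round(v, 4) for k, v in scores.items()})
```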

Availability of data and materials

Data can be shared upon request.


  1. Sadowski Ł, Nikoo M, Nikoo M (2018) Concrete compressive strength prediction using the imperialist competitive algorithm. Comput Concr An Int J 22(4):355–363


  2. Masoumi F, Najjar-Ghabel S, Safarzadeh A, Sadaghat B (2020) Automatic calibration of the groundwater simulation model with high parameter dimensionality using sequential uncertainty fitting approach. Water Supply 20(8):3487–3501.


  3. De Larrard F, Malier Y (2018) Engineering properties of very high performance concretes, in High Performance Concrete. CRC Press, Oxfordshire, pp 85–114


  4. Afroughsabet V, Biolzi L, Ozbakkaloglu T (2016) High-performance fiber-reinforced concrete: a review. J Mater Sci 51(14):6517–6551


  5. Chou J-S, Pham A-D (2013) Enhanced artificial intelligence for ensemble approach to predicting high performance concrete compressive strength. Constr Build Mater 49:554–563


  6. Zhao J, Shi L (2023) Predicting the compressive strength of High-performance concrete by using Radial basis function with optimization Improved Grey Wolf optimizer and Dragonfly algorithm. J Intell Fuzzy Syst 45(3):4089–103


  7. Nikoo M, Torabian Moghadam F, Sadowski Ł (2015) Prediction of concrete compressive strength by evolutionary artificial neural networks. Adv Mater Sci Eng 2015

  8. Qian X, Wang J, Fang Y, Wang L (2018) Carbon dioxide as an admixture for better performance of OPC-based concrete. J CO2 Util 25:31–38


  9. Zhang X, Akber MZ, Zheng W (2021) Prediction of seven-day compressive strength of field concrete. Constr Build Mater 305:124604


  10. Naseri H, Hosseini P, Jahanbakhsh H, Hosseini P, Gandomi AH (2023) A novel evolutionary learning to prepare sustainable concrete mixtures with supplementary cementitious materials. Environ Dev Sustain 25(7):5831–5865


  11. Naseri H, Jahanbakhsh H, Khezri K, Shirzadi Javid AA (2022) Toward sustainability in optimizing the fly ash concrete mixture ingredients by introducing a new prediction algorithm. Environ Dev Sustain 24(2):2767–2803


  12. Behnood A, Golafshani EM (2018) Predicting the compressive strength of silica fume concrete using hybrid artificial neural network with multi-objective grey wolves. J Clean Prod 202:54–64


  13. Lyngdoh GA, Zaki M, Krishnan NMA, Das S (2022) Prediction of concrete strengths enabled by missing data imputation and interpretable machine learning. Cem Concr Compos 128:104414


  14. Benhelal E, Zahedi G, Shamsaei E, Bahadori A (2013) Global strategies and potentials to curb CO2 emissions in cement industry. J Clean Prod 51:142–161


  15. Shirzadi Javid AA, Naseri H, Etebari Ghasbeh MA (2021) Estimating the optimal mixture design of concrete pavements using a numerical method and meta-heuristic algorithms. Iran. J Sci Technol Trans Civ Eng 45:913–927


  16. Naseri H, Jahanbakhsh H, Hosseini P, Nejad FM (2020) Designing sustainable concrete mixture by developing a new machine learning technique. J Clean Prod 258:120578


  17. Akbarzadeh MR, Ghafourian H, Anvari A, Pourhanasa R, Nehdi ML (2023) Estimating compressive strength of concrete using neural electromagnetic field optimization. Materials (Basel) 16(11):4200


  18. Sedaghat B, Tejani GG, Kumar S (2023) Predict the maximum dry density of soil based on individual and hybrid methods of machine learning. Adv Eng Intell Syst 002(3).

  19. Chou JS, Tsai CF, Pham AD, Lu YH (2014) Machine learning in concrete strength simulations: multi-nation data analytics. Constr Build Mater 73:771–780.


  20. Mahesh B (2020) Machine learning algorithms-a review. Int J Sci Res (IJSR) 9:381–386


  21. Zhou Z-H (2021) Machine learning. Springer Nature, New York City


  22. Wang H, Lei Z, Zhang X, Zhou B, Peng J (2016) Machine learning basics. Deep Learn 98–164

  23. Barkhordari MS, Armaghani DJ, Mohammed AS, Ulrikh DV (2022) Data-driven compressive strength prediction of fly ash concrete using ensemble learner algorithms. Buildings 12(2):132


  24. Naseri H, Jahanbakhsh H, Moghadas Nejad F, Golroo A (2020) Developing a novel machine learning method to predict the compressive strength of fly ash concrete in different ages. AUT J Civ Eng 4(4):423–436


  25. Suthar M (2020) Applying several machine learning approaches for prediction of unconfined compressive strength of stabilized pond ashes. Neural Comput Appl 32(13):9019–9028.


  26. Han Q, Gui C, Xu J, Lacidogna G (2019) A generalized method to predict the compressive strength of high-performance concrete by improved random forest algorithm. Constr Build Mater 226:734–742.


  27. Du P, Samat A, Waske B, Liu S, Li Z (2015) Random forest and rotation forest for fully polarized SAR image classification using polarimetric and spatial features. ISPRS J Photogramm Remote Sens 105:38–53


  28. Lam L, Wong Y, Poon C (1998) effect of fly ash and silica fume on compressive and fracture behaviors of concrete. Cem Concr Res 28(2):271–283.


  29. Biau G, Scornet E (2016) A random forest guided tour. Test 25:197–227


  30. Sarica A, Cerasa A, Quattrone A (2017) Random forest algorithm for the classification of neuroimaging data in Alzheimer’s disease: a systematic review. Front. Aging Neurosci 9:329

  31. Lin W, Wu Z, Lin L, Wen A, Li J (2017) An ensemble random forest algorithm for insurance big data analysis. IEEE Access 5:16568–16575


  32. Kulkarni A.D, Lowe B (2016) Random forest algorithm for land cover classification


  33. Wang G, Yuan Y, Guo W (2019) An improved rider optimization algorithm for solving engineering optimization problems. IEEE Access 7:80570–80576


  34. Binu D, Kariyappa BS (2018) RideNN: a new rider optimization algorithm-based neural network for fault diagnosis in analog circuits. IEEE Trans Instrum Meas 68(1):2–26


  35. Krishna MM, Panda N, Majhi SK (2021) Solving traveling salesman problem using hybridization of rider optimization and spotted hyena optimization algorithm. Expert Syst. Appl. 183:115353


  36. Hayyolalam V, Kazem AAP (2020) Black widow optimization algorithm: a novel meta-heuristic approach for solving engineering optimization problems. Eng Appl Artif Intell 87:103249


  37. Houssein EH, Helmy BE, Oliva D, Elngar AA, Shaban H (2021) A novel black widow optimization algorithm for multilevel thresholding image segmentation. Expert Syst Appl 167:114159


  38. Memar S, Mahdavi-Meymand A, Sulisz W (2021) Prediction of seasonal maximum wave height for unevenly spaced time series by Black Widow Optimization algorithm. Mar Struct 78:103005


  39. Naruei I, Keynia F (2021) A new optimization method based on COOT bird natural life model. Expert Syst Appl 183:115352


  40. Mostafa RR, Hussien AG, Khan MA, Kadry S, Hashim FA (2022) Enhanced coot optimization algorithm for dimensionality reduction. In: 2022 Fifth International Conference of Women in Data Science at Prince Sultan University (WiDS PSU), pp 43–48


  41. Wang H-Y et al (2022) Optimal wind energy generation considering climatic variables by Deep Belief network (DBN) model based on modified coot optimization algorithm (MCOA). Sustain Energy Technol Assessments 53:102744


  42. Huang L, Jiang W, Wang Y, Zhu Y, Afzal M (2022) Prediction of long-term compressive strength of concrete with admixtures using hybrid swarm-based algorithms. Smart Struct Syst 29(3):433–444


  43. Cheng H, Kitchen S, Daniels G (2022) Novel hybrid radial based neural network model on predicting the compressive strength of long-term HPC concrete. Adv Eng Intell Syst 1(2)

  44. Chen J (2023) High-performance concrete compressive property prediction via deep hybrid learning. J Intell Fuzzy Syst 45(3):4125–38


  45. Chen L (2022) Hybrid structured artificial network for compressive strength prediction of HPC concrete. J Appl Sci Eng 26(7):989–999


  46. Chen L, Liu F, Wu F (2022) Novel hybrid HGSO optimized supervised machine learning approaches to predict the compressive strength of admixed concrete containing fly ash and micro-silica. Eng Res Express 4(2):025022

  47. He D, Zong-Wei H, Jie X (2022) Flow direction algorithm-based machine learning approaches for the prediction of high-performance concrete strength property. Eng Res Express 4(3):35032


  48. Hu X (2023) Use an adaptive network fuzzy inference system model for estimating the compressive strength of high-performance concrete with two optimizers improved Grey Wolf algorithm and Dragonfly optimization algorithm. Multiscale Multidiscip Model Exp Des 1–14



Project cost Ningxia Hui Autonomous Region first-class grassroots teaching organization.


No funding.

Author information

Authors and Affiliations



All authors contributed to the study’s conception and design. Data collection, simulation, and analysis were performed by Wenbin Lan.

Corresponding author

Correspondence to Wenbin Lan.

Ethics declarations

Ethics approval and consent to participate

Not applicable; the data were collected from the cited references.

Competing interests

The authors declare that they have no competing interests.



Table 6 The test data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Lan, W. Accurate compressive strength prediction using machine learning algorithms and optimization techniques. J. Eng. Appl. Sci. 71, 1 (2024).
