 Research
 Open access
Estimation of mechanical properties of the modified high-performance concrete by novel regression models
Journal of Engineering and Applied Science volume 70, Article number: 157 (2023)
Abstract
Using support vector regression (SVR) analytics, a novel method has been developed for evaluating the compressive strength (CS) of high-performance concrete (HPC) containing fly ash (FA) and blast furnace slag (BFS). Both the salp swarm algorithm (SSA) and the grasshopper optimization algorithm (GOA) were used in this research to search for the critical SVR variables that may be tuned for better performance. The suggested approaches were created using 1030 trials, eight inputs (the primary components of admixtures, mix designs, curing age, and aggregates), and the CS as the forecasting goal. The findings were then compared with those reported elsewhere in the literature. According to the estimation results, the combined SSA-SVR and GOA-SVR analyses can work exceptionally well for estimation. The root mean square error (RMSE) of the GOA-SVR was remarkably lower than that of the SSA-SVR, and the comparison showed that the GOA-SVR delivered higher accuracy than previously published research. Overall, the developed GOA-SVR model may be considered a practical predictive system for the CS of HPC admixed with FA and BFS.
Introduction
High-performance concrete (HPC) is more efficient than conventional concrete (CC) in most cases, depending on the application. The American Concrete Institute (ACI) states that HPC is a concrete type that satisfies specified requirements for homogeneity and performance; neither standard mixing or curing methods nor standard components can produce these qualities [1, 2]. The durability of this concrete can be improved, maintenance costs are reduced, and service life is extended. The accessibility and cost of locally available resources must be considered while selecting the HPC mix proportions; as a result, more trial mixes and analyses are necessary than for CC [2]. Supplementary cementitious components and additives are added to CC to make HPC [2,3,4]. The compressive strength of HPC mixtures is critical, and establishing an appropriate estimation system for this attribute saves money and time while allowing for more efficient mix design. Statistical regression approaches have been used by researchers in this regard [3, 4]. An experimental approach based on regression can have major flaws: a formula must be in place before a regression analysis can be performed, and another significant limitation of regression models is the requirement of continued regularity [2]. Moreover, there is disagreement on the empirical formulae used in codes and standards to establish the \(CS\), because these formulae were developed by testing concrete that did not contain any supplementary cementitious materials.
Over the last decade, there has been a surge of interest in applying machine learning methods to tackle civil engineering challenges in both commercial and academic settings [5, 6]. Machine learning methods, in which learning is created from empirical effort, can be highly beneficial in developing computer models [7, 8]. The most extensively utilized machine learning method is the artificial neural network (ANN). ANNs are often used to analyze a wide range of concrete qualities [9,10,11,12] and have been used to forecast the CS and slump flow of HPC mixtures [13, 14]. Employing sequential learning NNs, Rajasekaran and Amalraj [15] and Rajasekaran et al. [16] developed prediction approaches for the strength of HPC mixtures, and a wavelet NN approach was utilized to analyze the \(CS\) of HPC by Rajasekaran and Lavanya [17]. Since ANNs are unable to explain the fundamentals of their predictions, they are commonly referred to as black-box systems; despite their efficiency, they often fail to give an interpretable forecasting model. Various authors have forecasted the mechanical features of HPC in previous years utilizing NNs and other AI approaches. Gradient-boosted ANNs and bagged ANNs were employed by Erdal et al. [18] to simulate the CS of HPC. When compared to previous trials, the ensemble approaches of Chou and Pham [19] fared well. Erdal [20] built two connected ensemble decision trees, while Cheng et al. [21] evaluated HPC CS using a tree-based model. According to previous studies, AI techniques are more effective than traditional methods for precisely and quickly estimating the CS of HPCs [22, 23]. Rafiei et al. [24] proposed a unique deep machine as a replacement for backpropagation NNs and SVR to determine the properties of concrete on the basis of mixture percentages; against real testing results, they reported approximately 98% precision. Nguyen et al.
[25] used a deep NN framework to determine the strength of foamed concrete. Rafiei et al. [26] used an optimization algorithm to handle the optimal values in the concrete mix. Genetic programming (GP) is another machine learning approach [27]. The criteria of natural genetic evolution lead GP to generate computer models on its own. In recent years, classical GP and its modifications have been used to deliver easier answers to civil engineering problems [28]. Mousavi et al. [2] combined GP and orthogonal least squares approaches to estimate the CS of HPC mixtures; when modeling CS, the amounts of fine/coarse aggregate, the ratio of superplasticizer, the samples' age, and the binder were all considered. Gene expression programming (GEP) is a more modern development of GP [29]. The GEP approach may be used as a dependable and effective substitute for traditional GP, and several studies have attempted to apply GEP in civil engineering [30,31,32,33].
Several studies have proposed alternative models for predicting the CS of HPCs, including a hybrid adaptive neuro-fuzzy inference system (ANFIS) with the arithmetic optimization algorithm (AOA) and the equilibrium optimizer (EO). The findings suggest that the integrated systems exhibited robust estimation skills, as shown by \({R}^{2}\) values of 0.9941 and 0.9975 for the training and testing phases, respectively [34]. Four deep learning algorithms were offered in another study, which was unusual in the literature; when using deep learning models, the \({R}^{2}\) value was approximately 0.960 during training and almost reached 0.940 in testing [35]. In other research, a unique hybrid model combining artificial bee colony (ABC) optimization and a cascade forward neural network (CFNN) was created for the CS prediction of HPC. The created model (CFNN-ABC) was able to correctly estimate the compressive strength of HPC with an \({R}^{2}\) of 0.953, and a two-layer architecture was the ideal neural network chosen by the ABC approach [36]. A highly accurate machine learning (ML) model was trained using the eXtreme Gradient Boosting (XGB) approach; however, the baseline model tends to overfit, with \({R}^{2}\) values of 0.996 and 0.919 for the training and testing datasets, respectively [37].
The main objective of the current research is to supply a practical way to evaluate the effectiveness of intelligent machines in calculating the CS of HPC via stringent testing. Using the SVR technique, we attempted to construct models that might predict the qualities of HPC in both its fresh and hardened states. The grasshopper optimization algorithm (GOA) and salp swarm algorithm (SSA) were used in this work to pinpoint the most critical components of SVR that need to be tuned. The created techniques were assessed using 1030 experiments and eight input parameters, such as concrete age, admixtures, and the major constituents of the mixes, with the compressive strength as the forecasting objective. The findings were then compared to other studies in the field [19, 28, 38,39,40,41,42].
The developed models offer several advantages over traditional mix design methods in materials science and civil engineering. These advantages stem from ML's ability to analyze large datasets, discover complex patterns, and make predictions based on learned patterns. Algorithms can analyze vast amounts of data to identify subtle relationships between various material properties and mix design outcomes, leading to mix designs that are more accurate and precise and reducing the risk of defects and failures. The developed models can automate many aspects of mix design, such as optimizing the proportions of materials and predicting the properties of the resulting mixture, and can adapt to different materials, conditions, and project requirements. Traditional mix design methods often rely on simplified assumptions and linear models; algorithms, on the other hand, can handle complex, nonlinear relationships between material properties and mix performance, leading to more accurate predictions. ML can help optimize mix designs to use materials more efficiently, minimizing waste and reducing the costs associated with the overuse or underuse of materials. ML-based mix design can also reduce the cost of materials and construction by optimizing the use of resources, improving performance, and minimizing the need for expensive adjustments or revisions.
Methods
Dataset preparation and description
Over a thousand HPC sample datasets were investigated in this research [4, 43,44,45,46]. Ordinary Portland cement was used to build all specimens, which were then allowed time to cure. Several different types and sizes of samples have been used in the HPC data released so far. In this study, the CS of HPC is based on the following eight factors:

(1) C: contents of cement

(2) BFS: blast furnace slag

(3) FA: fly ash

(4) W: water

(5) SP: superplasticizer

(6) CA: coarse aggregate

(7) FA: fine aggregate

(8) AC: age of HPC
Figure 1 shows the charts of the training and evaluation datasets with their lognormal distributions, and Table 1 shows the collection range for these items. For the data division, 70% of the 1030 records were used for learning and the remainder for testing. Subsets of the original dataset were selected at random, with a uniform distribution, for training and testing. The 70:30 train/test ratio was chosen for this investigation even though many other ratios could have been used, as suggested in previous studies [47, 48]. A statistical study was carried out to show that the choice of these inputs was sufficient; as a consequence, no substantial crossing in the eight-dimensional input space was found [45, 49], which is needed to train AI networks with good generalization abilities.
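As a rough illustration of the data division described above, the following sketch performs a 70:30 random split of 1030 records; the placeholder rows and the `split_dataset` helper are illustrative assumptions, not the authors' code.

```python
import random

def split_dataset(records, train_frac=0.7, seed=42):
    """Randomly partition records into training and testing subsets,
    mirroring the 70:30 split of the 1030 samples described in the text."""
    rng = random.Random(seed)
    indices = list(range(len(records)))
    rng.shuffle(indices)  # uniform random ordering
    cut = int(round(train_frac * len(records)))
    train = [records[i] for i in indices[:cut]]
    test = [records[i] for i in indices[cut:]]
    return train, test

# Each record: eight inputs (C, BFS, FA, W, SP, CA, fine aggregate, AC)
# plus the CS target; the values below are placeholders.
data = [[300.0, 0.0, 0.0, 180.0, 5.0, 1000.0, 800.0, 28.0, 45.0]] * 1030
train, test = split_dataset(data)
print(len(train), len(test))  # 721 309
```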
Considered techniques
Grasshopper optimization algorithm (GOA)
Metaheuristic methods are built on simulating nature (Fig. 2) and are typically used to solve global optimization problems. Categories of metaheuristic methods include those based on evolution, physical and chemical processes, swarm intelligence, and human behavior. We optimized using GOA in this research. Saremi et al. [50] introduced GOA, an efficient swarm-based metaheuristic motivated by natural processes. In this study, GOA is utilized to determine the ideal values of the key variables (ideal factor rates) of the regression models; we employed GOA to search for these ideal factor rates of the SVR models.
GOA imitates the organic behavior of grasshopper swarms. The two steps of nature-inspired optimization methods are exploration and exploitation: during exploration the search agents of the optimization method make sudden movements, whereas during exploitation they move more locally. The following formula represents the behavior of the grasshoppers and the theory of the optimization search [50]:

\({X}_{i}={r}_{1}{S}_{i}+{r}_{2}{G}_{i}+{r}_{3}{A}_{i}\)  (1)
In this equation, \(i\) stands for every grasshopper and \({X}_{i}\) indicates where the \(i\)-th grasshopper is located. \({S}_{i}\) represents the social interaction of the grasshoppers, while \({G}_{i}\) and \({A}_{i}\) stand for the gravity force and wind advection, respectively. The \(r\) variables are random numbers in the range [0, 1]. Equation (2) [50] describes the grasshoppers' social behavior (attraction–repulsion):

\({S}_{i}=\sum_{j=1, j\ne i}^{N}s\left({d}_{ij}\right)\widehat{{d}_{ij}}\)  (2)
In this equation, \(s\) stands for the strength of the social forces, \(s\left(r\right)=f{e}^{-r/l}-{e}^{-r}\), where \(l\) is the attractive length scale and \(f\) the intensity of attraction [50]. \(N\) is the number of grasshoppers. The distance between two grasshoppers is \({d}_{ij}=\left|{x}_{j}-{x}_{i}\right|\), and \(\widehat{{d}_{ij}}=\left({x}_{j}-{x}_{i}\right)/{d}_{ij}\) is the unit vector from the \(i\)-th to the \(j\)-th grasshopper. The social interactions of artificial grasshoppers are governed by the \(s\) function, which divides the interval between every pair of grasshoppers into three regions (repulsion region, comfort zone, and attraction region). In their investigation of distances from 0 to 15, Saremi et al. [50] found repulsion in the interval [0, 2.079]; they identified the comfort distance, where neither attraction nor repulsion operates, as a separation of 2.079 units between two artificial grasshoppers. This region is altered by \(f\) and \(l\). However, the \(s\) function approaches zero when the distance exceeds 10, so it cannot generate strong forces over long distances between grasshoppers. Another element of \({X}_{i}\) is \({G}_{i}\) (the gravity force) [50]:

\({G}_{i}=-g\widehat{{e}_{g}}\)  (3)
In Eq. (3), \(g\) stands for the gravitational constant and \(\widehat{{e}_{g}}\) for the unit vector pointing toward the center of the earth. The last element of \({X}_{i}\) is \({A}_{i}\) (wind advection):

\({A}_{i}=u\widehat{{e}_{w}}\)  (4)
In this equation, \(u\) is a constant drift and \(\widehat{{e}_{w}}\) a unit vector in the direction of the wind. Traditional swarm-based methods simulate the swarm as it explores and exploits the search space around a solution. The GOA model of \({X}_{i}\) replicates the interactions of a swarm of grasshoppers in free space and emulates how a grasshopper might act in multiple spatial dimensions, including 2D, 3D, and hyper-dimensional spaces [50].
For optimization, GOA uses the following modified position update, Eq. (5) [50]:

\({X}_{i}^{d}=c\left(\sum_{j=1, j\ne i}^{N}c\frac{{ub}_{d}-{lb}_{d}}{2}s\left(\left|{x}_{j}^{d}-{x}_{i}^{d}\right|\right)\frac{{x}_{j}-{x}_{i}}{{d}_{ij}}\right)+\widehat{{T}_{d}}\)  (5)

The lower and upper boundaries in the \(d\)-th dimension are denoted \({lb}_{d}\) and \({ub}_{d}\). The comfort, repulsion, and attraction regions are reduced by the decreasing coefficient \(c\), which drives the swarm toward \(\widehat{{T}_{d}}\), the best (target) solution. Every search agent in GOA has a single position vector, which is used to calculate the agent's next location. The summation in Eq. (5) replicates grasshopper interaction by considering the locations of the other grasshoppers, \(\widehat{{T}_{d}}\) reflects their propensity to migrate toward food sources, and \(c\) simulates the grasshoppers' slowing as they approach the food source; \(c\) is updated by Eq. (6) [50].
\(c={c}_{\mathrm{max}}-l\frac{{c}_{\mathrm{max}}-{c}_{\mathrm{min}}}{L}\)  (6)

In this equation, \(l\) is the present iteration and \(L\) the highest number of iterations, while \({c}_{\mathrm{min}}\) and \({c}_{\mathrm{max}}\) denote the lowest and highest values [50]. We utilized the same settings as Saremi et al. [50], namely \({c}_{\mathrm{max}}=1\) and \({c}_{\mathrm{min}}=0.00001\). In summary, the swarm eventually approaches a fixed goal as the comfort zone is reduced by the \(c\) variable, and the swarm successfully pursues a moving goal through \(\widehat{{T}_{d}}\); the grasshoppers approach the objective over many iterations. Algorithm 1 below displays the GOA pseudocode [50].
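The GOA update loop described above can be sketched as follows. This is a minimal NumPy illustration of the social force, the position update, and the decreasing coefficient, assuming the commonly reported defaults \(f = 0.5\) and \(l = 1.5\); the function names are ours, not from the paper.

```python
import numpy as np

def s_func(r, f=0.5, l=1.5):
    """Social force s(r) = f*exp(-r/l) - exp(-r) (assumed defaults f=0.5, l=1.5)."""
    return f * np.exp(-r / l) - np.exp(-r)

def c_coeff(l_iter, L, c_max=1.0, c_min=1e-5):
    """Decreasing coefficient c = c_max - l*(c_max - c_min)/L."""
    return c_max - l_iter * (c_max - c_min) / L

def goa_step(X, target, c, lb, ub):
    """One GOA position update: each grasshopper moves under scaled social
    forces from the other grasshoppers plus attraction toward the target."""
    N, dim = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        social = np.zeros(dim)
        for j in range(N):
            if i == j:
                continue
            dist = np.linalg.norm(X[j] - X[i])
            unit = (X[j] - X[i]) / (dist + 1e-12)  # unit vector toward j
            social += c * (ub - lb) / 2.0 * s_func(dist) * unit
        # scaled social term plus the best (target) solution, kept in bounds
        X_new[i] = np.clip(c * social + target, lb, ub)
    return X_new
```

A full optimizer would repeat `goa_step` for many iterations, shrinking `c` with `c_coeff` and updating `target` whenever a better solution is found.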
Salp swarm optimization (SSA)
As presented in Fig. 3, salps move in a highly coordinated way, often traveling in a cooperative chain as they seek food in the oceans and seas. This characteristic allows the swarm to move more fluidly and with greater kinetic energy while searching for food [52]. A single leader leads the chain of salp followers. A group chain \(X\) with \(n\) salps in a \(d\)-dimensional search space is described by the matrix

\(X={\left[{x}_{j}^{i}\right]}_{n\times d}\)  (7)
The targeted food source is represented by \(F\). The position of the leader \(\left({x}_{j}^{1}\right)\) is updated as follows:

\({x}_{j}^{1}=\left\{\begin{array}{l}{F}_{j}+{k}_{1}\left(\left({U}_{b}-{L}_{b}\right){k}_{2}+{L}_{b}\right),\ {k}_{3}\ge 0.5\\ {F}_{j}-{k}_{1}\left(\left({U}_{b}-{L}_{b}\right){k}_{2}+{L}_{b}\right),\ {k}_{3}<0.5\end{array}\right.\)  (8)
Based on this equation, \({k}_{2}\) and \({k}_{3}\) are two randomly generated numbers in the range [0, 1], and \({U}_{b}\) and \({L}_{b}\) stand for the upper and lower boundaries. According to the following equation, \({k}_{1}\) maintains an equilibrium between exploitation and exploration:

\({k}_{1}=2{e}^{-{\left(4n/N\right)}^{2}}\)  (9)
In Eq. (9), \(n\) and \(N\) stand for the current and highest iterations, respectively. The positions of the remaining (follower) salps are determined by the formula below:

\({x}_{j}^{i}=\frac{1}{2}\left({x}_{j}^{i}+{x}_{j}^{i-1}\right)\)  (10)
In Algorithm 2, the fundamental SSA pseudocode is described.
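A minimal sketch of one SSA iteration, following the leader, balance-coefficient, and follower updates described above; the helper name and the clipping to the bounds are our assumptions, not part of the paper.

```python
import numpy as np

def ssa_step(X, food, n_iter, N_iter, lb, ub, rng):
    """One SSA iteration: the leader (row 0) moves around the food source,
    each follower averages its position with its predecessor in the chain."""
    k1 = 2.0 * np.exp(-((4.0 * n_iter / N_iter) ** 2))  # exploration/exploitation balance
    X = X.copy()
    dim = X.shape[1]
    for j in range(dim):  # leader update
        k2, k3 = rng.random(), rng.random()
        step = k1 * ((ub - lb) * k2 + lb)
        X[0, j] = food[j] + step if k3 >= 0.5 else food[j] - step
    for i in range(1, X.shape[0]):  # follower update
        X[i] = (X[i] + X[i - 1]) / 2.0
    return np.clip(X, lb, ub)
```

In a full optimizer this step would run for `N_iter` iterations, with `food` replaced by the best solution found so far.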
Support vector regression (SVR)
Statistical learning theory and the structural risk minimization principle are the foundations of the nonlinear regression method known as SVR [53]. The core of this method is a nonlinear transformation technique (kernel functions) that converts the initial input space into a new hyperspace. The complex, nonlinear interactions between input and outcome parameters are described by a linear function in the transformed hyperspace [53, 54]. SVR is the procedure of identifying the function \(f(x)\) that is as flat as feasible and has a maximum deviation \(\varepsilon\) from the training samples \(\left({x}_{i},{y}_{i}\right)\) for \(i=1,\dots ,N\). The complexity of the model may be reduced by maximizing the function's flatness, which affects the model's overall performance. In fact, according to learning theory [55], the generalization error may be bounded by the sum of two terms, one depending on the model's complexity and the other on the error on the training data. The foundation of SVR techniques is the management of model complexity throughout training.
The procedure is initially explained for a linear function \(f(x)\) of the following type:

\(f\left(x\right)=w\cdot x+b\)  (11)
Based on this equation, \(x\), \(w\), and \(b\) denote the input vector, the vector of parameters (or weights), and a constant to be determined, respectively. For nonlinear problems, the data is mapped onto a higher-dimensional space utilizing a nonlinear kernel:

\(f\left(x\right)=w\cdot \varphi \left(x\right)+b\)  (12)
In this equation, \(\varphi (x)\) denotes the nonlinear mapping function.
The data may be mapped onto a higher-dimensional feature space in which a linear regression method can be applied. The \(w\) and \(b\) may be obtained by minimizing the following function:

\(\mathrm{min}\ \frac{1}{2}{\Vert w\Vert }^{2}+C\sum_{i=1}^{N}\left({\xi }_{i}+{\xi }_{i}^{*}\right)\)  (13)
Subject to:

\({y}_{i}-w\cdot \varphi \left({x}_{i}\right)-b\le \varepsilon +{\xi }_{i}\), \(w\cdot \varphi \left({x}_{i}\right)+b-{y}_{i}\le \varepsilon +{\xi }_{i}^{*}\), \({\xi }_{i},{\xi }_{i}^{*}\ge 0\)
Based on this equation, \({\xi }_{i}^{*}\) and \({\xi }_{i}\) denote the negative and positive errors, as presented in Fig. 4. The constant \(C>0\) is a hyperparameter that adjusts the balance between the amount of error permitted and the flatness of the function \(f(x)\). The penalty factor \(C\) and the kernel function variable (\(\sigma\)) control the generalization and fitting capabilities of the SVR model, respectively. Selecting the optimal \(C\) and \(\sigma\) may help avoid under- or overfitting while also improving the estimation efficiency of the SVR model.
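To illustrate the role of the slacks and the penalty factor \(C\), the following sketch evaluates the regularized \(\varepsilon\)-insensitive objective for a linear model. It is a toy illustration under our own naming, not the paper's implementation; real SVR solvers minimize this objective in its dual form.

```python
import numpy as np

def svr_objective(w, b, X, y, C=1.0, eps=0.1):
    """SVR primal objective: 0.5*||w||^2 plus C times the total
    epsilon-insensitive slack (xi for positive, xi* for negative errors)."""
    resid = y - (X @ w + b)
    xi = np.maximum(0.0, resid - eps)        # positive errors beyond the eps-tube
    xi_star = np.maximum(0.0, -resid - eps)  # negative errors beyond the eps-tube
    return 0.5 * np.dot(w, w) + C * np.sum(xi + xi_star)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
# Perfect fit: every residual is inside the eps-tube, only the regularizer remains.
print(svr_objective(np.array([1.0]), 0.0, X, y, C=10.0, eps=0.1))  # 0.5
```

Increasing `C` penalizes points outside the \(\varepsilon\)-tube more strongly (tighter fit), while decreasing it favors a flatter, simpler function.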
Metrics
The formulations in Eqs. (14)–(17) depict the metric values used to assess the models' robustness and to compare with the literature in order to choose the best-performing model.
1) Coefficient of determination (\({R}^{2}\)):

\({R}^{2}={\left(\frac{\sum_{d=1}^{D}\left({m}_{d}-\overline{m}\right)\left({z}_{d}-\overline{z}\right)}{\sqrt{\sum_{d=1}^{D}{\left({m}_{d}-\overline{m}\right)}^{2}\sum_{d=1}^{D}{\left({z}_{d}-\overline{z}\right)}^{2}}}\right)}^{2}\)  (14)

2) Root mean square error (\(RMSE\)):

\(RMSE=\sqrt{\frac{1}{D}\sum_{d=1}^{D}{\left({z}_{d}-{m}_{d}\right)}^{2}}\)  (15)

3) Mean absolute error (\(MAE\)):

\(MAE=\frac{1}{D}\sum_{d=1}^{D}\left|{z}_{d}-{m}_{d}\right|\)  (16)

4) \({A}_{20\mathrm{Index}}\):

\({A}_{20\mathrm{Index}}=\frac{{m}_{20}}{M}\)  (17)
In these equations, \({m}_{d}\) denotes the recorded values, \(\overline{m}\) the mean of the records, \({z}_{d}\) the predicted values, \(\overline{z}\) the mean of the predictions, and \(D\) the number of data. \(M\) is the number of samples, and \({m}_{20}\) is the number of samples with a recorded/predicted ratio in [0.8, 1.2].
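The four metrics can be computed directly from their definitions; the following sketch assumes a single dataset, so \(D\) and \(M\) coincide, and the sample values are placeholders of ours.

```python
import math

def metrics(recorded, predicted):
    """R^2, RMSE, MAE, and A20 index per the definitions in the text."""
    D = len(recorded)
    m_bar = sum(recorded) / D
    z_bar = sum(predicted) / D
    num = sum((m - m_bar) * (z - z_bar) for m, z in zip(recorded, predicted)) ** 2
    den = (sum((m - m_bar) ** 2 for m in recorded)
           * sum((z - z_bar) ** 2 for z in predicted))
    r2 = num / den  # squared correlation of records and predictions
    rmse = math.sqrt(sum((z - m) ** 2 for m, z in zip(recorded, predicted)) / D)
    mae = sum(abs(z - m) for m, z in zip(recorded, predicted)) / D
    # A20: fraction of samples whose recorded/predicted ratio lies in [0.8, 1.2]
    m20 = sum(1 for m, z in zip(recorded, predicted) if 0.8 <= m / z <= 1.2)
    return r2, rmse, mae, m20 / D

rec = [30.0, 45.0, 60.0]   # placeholder recorded CS values (MPa)
pred = [32.0, 44.0, 61.0]  # placeholder predicted CS values (MPa)
r2, rmse, mae, a20 = metrics(rec, pred)
print(round(rmse, 3), round(mae, 3), a20)  # 1.414 1.333 1.0
```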
Results and discussion
The results of the GOA-SVR and SSA-SVR architectures for forecasting the CS of the HPC augmented with BFS and FA are provided here. As stated in the previous section, the productivity of SVR is determined by choosing the proper values for the combination of the key SVR parameters. The recorded and computed values of the CS of HPC throughout the training and testing stages for the GOA-SVR and SSA-SVR simulations are presented in Fig. 5. Along with the time series plots, a residual CS graph with a normally distributed curve around the zero line is given. \({R}^{2}\), RMSE, MAE, and \({A}_{20\mathrm{Index}}\) were calculated to assess the workability of the GOA-SVR and SSA-SVR (see Table 2). Both techniques show great potential for properly forecasting the CS of HPC.
This section compares the statistical indicators of the developed models, GOA-SVR and SSA-SVR, to see whether one version performs better than the other; furthermore, the findings are compared with previously published ones. The findings reveal that the combined GOA-SVR and SSA-SVR frameworks perform very well in prediction, with \({R}^{2}\) values of 0.9695 and 0.9688 for GOA-SVR and 0.9438 and 0.9161 for SSA-SVR for the training and testing sections, respectively. However, analyzing and evaluating the generated results is necessary to choose the appropriate technique. Compared with the SSA-SVR, the RMSE of the GOA-SVR decreased in the training portion from 3.6962 to 2.4815 MPa, and a striking drop from 4.0861 to 2.3906 MPa was achieved in the testing stage. Another criterion, MAE, yielded similar findings to RMSE and shows that the GOA-SVR has a better ability for CS prediction, with \({MAE}_{\mathrm{Train}}\) = 2.124 and \({MAE}_{\mathrm{Test}}\) = 1.5049, smaller than those of the SSA-SVR at \({MAE}_{\mathrm{Train}}\) = 3.476 and \({MAE}_{\mathrm{Test}}\) = 3.2734. A similar pattern is found in the \({A}_{20\mathrm{Index}}\) indicator, which was 14 percent higher in the training portion and 16 percent higher in the testing section for GOA-SVR.
The computations in this article were compared with [19, 28, 38,39,40,41,42] in order to give a full review and validation. These studies used different types of techniques, such as gene expression programming (GEP) [38], a semi-empirical method (SEM) [40], Gaussian process regression (GPR) [39, 41], extreme gradient boosting (XGBoost) [42], artificial neural networks (ANNs) [19], and multi-gene genetic programming (Multi-GGP) [28]. As shown in the comparison table, GEP [38], for example, exhibited a somewhat lower \({R}^{2}\) and a much higher MAE than GOA-SVR, at 0.8224 and 5.202, respectively. SEM [40], as a new technique, performed worse than GOA-SVR, with \({R}^{2}\) at 0.84, significantly lower than 0.9695; RMSE at 6.3, more than double the 3.1108 of GOA-SVR; \(MAE\) at 4.91, greater than 2.124; and \({A}_{20\mathrm{Index}}\) at 0.68, lower than 0.9417. Moreover, the GOA-SVR beat the GPR [41] with respect to \({R}^{2}\), RMSE, and MAE. Another GPR [39] was marginally better in the training phase than this article's model, but was very poor in the testing section. Other approaches, such as Multi-GGP [28] and ANNs [19], performed worse than GOA-SVR, with \({R}^{2}\) values of 0.8046 and 0.8469, respectively, well below 0.9688. Finally, the most recently developed technique, XGBoost [42], came close, although it was also weaker than GOA-SVR. In conclusion, the suggested model is the GOA-SVR framework created for simulating the CS of HPC with FA and BFS.
Conclusions
The main objective of the current research was to supply a practical way to evaluate the effectiveness of intelligent machines in calculating the compressive strength (CS) of high-performance concrete (HPC) via stringent testing. Using the support vector regression (SVR) technique, we attempted to construct models that might predict the qualities of HPC in both its fresh and hardened states. The grasshopper optimization algorithm (GOA) and salp swarm algorithm (SSA) were used in this work to pinpoint the most critical components of SVR that need to be tuned. The created techniques were assessed by utilizing 1030 experiments and eight input parameters, such as admixtures, concrete age, and the major constituents of the mixes, with the CS as the forecasting objective. The findings were then compared to other studies in the field.

The findings reveal that the combined GOA-SVR and SSA-SVR frameworks perform very well in prediction, with \({R}^{2}\) values of 0.9695 and 0.9688 for GOA-SVR and 0.9438 and 0.9161 for SSA-SVR for the training and testing sections, respectively.

Compared with the SSA-SVR, the RMSE of the GOA-SVR decreased in the training portion from 3.6962 to 2.4815 MPa, and a striking drop from 4.0861 to 2.3906 MPa was achieved in the testing stage. Another criterion, MAE, yielded similar findings to RMSE and shows that the GOA-SVR has a better ability for CS prediction, with \({MAE}_{\mathrm{Train}}\) = 2.124 and \({MAE}_{\mathrm{Test}}\) = 1.5049, smaller than those of the SSA-SVR at \({MAE}_{\mathrm{Train}}\) = 3.476 and \({MAE}_{\mathrm{Test}}\) = 3.2734. A similar pattern is found in the \({A}_{20\mathrm{Index}}\) indicator, which was 14 percent higher in the training portion and 16 percent higher in the testing section for GOA-SVR.

As shown, the suggested GOA-SVR performed the best compared to the literature. GEP [38] exhibited a lower \({R}^{2}\) and a much higher MAE than GOA-SVR, at 0.8224 and 5.202, respectively. SEM [40] performed worse than GOA-SVR, with \({R}^{2}\) at 0.84, significantly lower than 0.9695; an RMSE more than 50% higher; and \({A}_{20\mathrm{Index}}\) at 0.68, lower than 0.9417. Moreover, the GOA-SVR beat the GPR [41] with respect to \({R}^{2}\), RMSE, and MAE. Other approaches, such as ANNs [19] and Multi-GGP [28], performed worse than GOA-SVR, with \({R}^{2}\) values of 0.8469 and 0.8046, respectively, well below 0.9688. Finally, the most recently developed technique, XGBoost [42], came close, although it was also weaker than GOA-SVR.

Finally, the suggested model is the GOA-SVR framework created for simulating the CS of HPC admixed with FA and BFS.
Availability of data and material
The authors do not have permissions to share data.
Abbreviations
HPC: High-performance concrete

CS: Compressive strength

ANN: Artificial neural network

SVR: Support vector regression

GOA: Grasshopper optimization algorithm

ML: Machine learning

SSA: Salp swarm algorithm

RMSE: Root mean square error

MAE: Mean absolute error

SP: Superplasticizer

\({R}^{2}\): Coefficient of determination

AC: Age of concrete

FA: Fly ash

FA: Fine aggregate

BFS: Blast furnace slag

W: Water

C: Cement

CA: Coarse aggregate
References
Cook RA, Goodspeed C, Vanicar S (1998) High-Performance Concrete Defined for Highway Structures. Federal Highway Administration, United States
Mousavi SM, Gandomi AH, Alavi AH et al (2010) Modeling of compressive strength of HPC mixes using a combined algorithm of genetic programming and orthogonal least squares. Structural engineering and mechanics 36:225–241
Domone PLJ, Soutsos MN (1994) Approach to the proportioning of highstrength concrete mixes. Concr Int 16:26–31
Yeh IC (1998) Modeling of strength of high-performance concrete using artificial neural networks. Cem Concr Res 28:1797–1808
Sarkhani Benemaran R, Esmaeili-Falak M, Javadi A (2022) Predicting resilient modulus of flexible pavement foundation using extreme gradient boosting based optimised models. International Journal of Pavement Engineering 1–20
Benemaran RS (2023) Application of extreme gradient boosting method for evaluating the properties of episodic failure of borehole breakout. Geoenergy Science and Engineering 226:211837
Esmaeili-Falak M, Benemaran RS (2023) Ensemble deep learning-based models to predict the resilient modulus of modified base materials subjected to wet-dry cycles. Geomechanics and Engineering 401:132833
Sarkhani Benemaran R, Esmaeili-Falak M (2023) Predicting the Young's modulus of frozen sand using machine learning approaches: State-of-the-art review. Geomechanics and Engineering 34:507–527
Basma AA, Barakat SA, AlOraimi S (1999) Prediction of cement degree of hydration using artificial neural networks. ACI Mater J 96:167–172
Ji T, Lin T, Lin X (2006) A concrete mix proportion design algorithm based on artificial neural networks. Cem Concr Res 36:1399–1408
Lee SC (2003) Prediction of concrete strength using artificial neural networks. Eng Struct 25:849–857
Yeh IC (2007) Modeling slump flow of concrete using second-order regressions and artificial neural networks. Cem Concr Compos 29:474–480
Kasperkiewicz J, Racz J, Dubrawski A (1995) HPC strength prediction using artificial neural network. J Comput Civ Eng 9:279–284
Prasad BKR, Eskandari H, Reddy BVV (2009) Prediction of compressive strength of SCC and HPC with high volume fly ash using ANN. Constr Build Mater 23:117–128
Rajasekaran S, Amalraj R (2002) Predictions of design parameters in civil engineering problems using SLNN with a single hidden RBF neuron. Comput Struct 80:2495–2505
Rajasekaran S, Suresh D, Vijayalakshmi Pai GA (2002) Application of sequential learning neural networks to civil engineering modeling problems. Eng Comput 18:138–147
Rajasekaran S, Lavanya S (2007) Hybridization of genetic algorithm with immune system for optimization problems in structural engineering. Struct Multidiscip Optim 34:415–429
Erdal HI, Karakurt O, Namli E (2013) High performance concrete compressive strength forecasting using ensemble models based on discrete wavelet transform. Eng Appl Artif Intell 26:1246–1254
Chou JS, Pham AD (2013) Enhanced artificial intelligence for ensemble approach to predicting high performance concrete compressive strength. Constr Build Mater 49:554–563
Erdal HI (2013) Twolevel and hybrid ensembles of decision trees for high performance concrete compressive strength prediction. Eng Appl Artif Intell 26:1689–1697
Cheng MY, Firdausi PM, Prayogo D (2014) High-performance concrete compressive strength prediction using Genetic Weighted Pyramid Operation Tree (GWPOT). Eng Appl Artif Intell 29:104–113
Kaloop MR, Kumar D, Samui P et al (2020) Compressive strength prediction of high-performance concrete using gradient tree boosting machine. Constr Build Mater 264:120198
Asteris PG, Roussis PC, Douvika MG (2017) Feedforward neural network prediction of the mechanical properties of sandcrete materials. Sensors 17:1344
Rafiei MH, Khushefati WH, Demirboga R et al (2017) Supervised deep restricted Boltzmann machine for estimation of concrete. ACI Mater J 114:237
Nguyen T, Kashani A, Ngo T et al (2019) Deep neural network with high-order neuron for the prediction of foamed concrete strength. Computer-Aided Civil and Infrastructure Engineering 34:316–332
Rafiei MH, Khushefati WH, Demirboga R, et al (2017) Novel Approach for Concrete Mixture Design Using Neural Dynamics Model and Virtual Lab Concept. ACI Mater J 114:117–127
Angeline PJ (1994) Genetic programming: on the programming of computers by means of natural selection, by John R. Koza. A Bradford Book, MIT Press, Cambridge, MA, 1992, xiv + 819 pp (book review). Elsevier
Gandomi AH, Alavi AH (2012) A new multigene genetic programming approach to nonlinear system modeling. Part I: materials and structural engineering problems. Neural Comput Appl. 21:171–187
Ferreira C (2001) Gene expression programming: a new adaptive algorithm for solving problems. arXiv preprint cs/0102027 13(2):87–129
Alavi AH, Gandomi AH (2011) A robust data mining approach for formulation of geotechnical engineering systems. Eng Comput (Swansea) 28(3):242–74
Gandomi AH, Alavi AH, Mirzahosseini MR et al (2011) Nonlinear geneticbased models for prediction of flow number of asphalt mixtures. J Mater Civ Eng 23:248–263
Baykasoğlu A, Güllü H, Çanakçı H et al (2008) Prediction of compressive and tensile strength of limestone via genetic programming. Expert Syst Appl 35:111–123
Cevik A, Cabalar AF (2009) Modelling damping ratio and shear modulus of sand–mica mixtures using genetic programming. Expert Syst Appl 36:7749–7757
Niu Z, Yuan Y, Sun J (2023) Neuro-fuzzy system development to estimate the compressive strength of improved high-performance concrete. Multiscale and Multidisciplinary Modeling, Experiments and Design 1–15
Islam N, Kashem A, Das P et al (2023) Prediction of high-performance concrete compressive strength using deep learning techniques. Asian Journal of Civil Engineering 1–15
Imran M, Khushnood RA, Fawad M (2023) A hybrid data-driven and metaheuristic optimization approach for the compressive strength prediction of high-performance concrete. Case Studies in Construction Materials 18:e01890
Khan MI, Abbas YM (2023) Robust extreme gradient boosting regression model for compressive strength prediction of blast furnace slag and fly ash concrete. Mater Today Commun 35:105793
Mousavi SM, Aminian P, Gandomi AH et al (2012) A new predictive model for compressive strength of HPC using gene expression programming. Adv Eng Softw 45:105–114
Asteris PG, Skentou AD, Bardhan A et al (2021) Predicting concrete compressive strength using hybrid ensembling of surrogate machine learning models. Cem Concr Res 145:106449
Nguyen NH, Vo TP, Lee S et al (2021) Heuristic algorithmbased semiempirical formulas for estimating the compressive strength of the normal and high performance concrete. Constr Build Mater 304:124467
Van DD, Adeli H, Ly HB et al (2020) A sensitivity and robustness analysis of GPR and ANN for high-performance concrete compressive strength prediction using a Monte Carlo simulation. Sustainability 12:830
Lee S, Nguyen N, Karamanli A et al (2022) Super learner machine-learning algorithms for compressive strength prediction of high performance concrete. Structural Concrete 24(2):2208–2228
Yeh IC (1999) Design of high-performance concrete mixture using neural networks and nonlinear programming. J Comput Civ Eng 13:36–42
Yeh IC (2003) Prediction of strength of fly ash and slag concrete by the use of artificial neural networks. J Chin Inst Civil Hydraul Eng 15:659–663
Yeh IC (2006) Analysis of strength of concrete using design of experiments and neural networks. J Mater Civ Eng 18:597–604
Yeh IC (1998) Modeling concrete strength with augment-neuron networks. J Mater Civ Eng 10:263–268
Leema N, Nehemiah HK, Kannan A (2016) Neural network classifier optimization using differential evolution with global information and back propagation algorithm for clinical datasets. Appl Soft Comput 49:834–844
Khorsheed MS, AlThubaity AO (2013) Comparative evaluation of text classification techniques using a large diverse Arabic dataset. Lang Resour Eval 47:513–538
Shi X, Yu X, Esmaeili-Falak M (2023) Improved arithmetic optimization algorithm and its application to carbon fiber reinforced polymer-steel bond strength estimation. Compos Struct 306:116599
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47
Aljarah I, Al-Zoubi A, Faris H et al (2018) Simultaneous feature selection and support vector machine optimization using the grasshopper optimization algorithm. Cognit Comput 10:478–495
Mirjalili S, Gandomi AH, Mirjalili SZ et al (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
Smola AJ, Schölkopf B (2004) A tutorial on support vector regression. Stat Comput 14:199–222
Masoumi F, Najjar-Ghabel S, Safarzadeh A et al (2020) Automatic calibration of the groundwater simulation model with high parameter dimensionality using sequential uncertainty fitting approach. Water Supply 20:3487–3501
Aghayari Hir M, Zaheri M, Rahimzadeh N (2022) Prediction of rural travel demand by spatial regression and artificial neural network methods (Tabriz County). Journal of Transportation Research 20(4):367–386
Andrew AM (2000) An introduction to support vector machines and other kernel-based learning methods, by Nello Cristianini and John Shawe-Taylor, Cambridge University Press, Cambridge 18(6):687–689
Acknowledgements
The authors have no individuals or organizations to acknowledge for contributions to this work.
Funding
No funding was obtained for this study.
Author information
Contributions
LJ contributed to the methodology, software, validation, and formal analysis. WJ performed formal analysis, methodology, validation, and language review. YS handled writing—original draft preparation, conceptualization, supervision, and project administration. All authors have read and approved the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Jingtao, L., Jing, W. & Suyuan, Y. Estimation of mechanical properties of the modified high-performance concrete by novel regression models. J. Eng. Appl. Sci. 70, 157 (2023). https://doi.org/10.1186/s44147-023-00317-2