1. | Cover-Contents Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi Pages I - V |
2. | An application for forecasting the number of applications to the emergency department with time series analysis and machine learning methods Sema Çiftçi, Gül Didem Batur Sir doi: 10.5505/pajes.2022.18488 Pages 667 - 679 Today, the demand for emergency health services increases extraordinarily in cases such as epidemics, earthquakes, natural disasters, and explosions. Accurate estimation of this demand will facilitate crisis management for extraordinary situations, as it enables determining the number of people who will apply to the emergency services and carrying out the relevant resource planning effectively. This study aims to forecast the number of applications to an emergency department. For the seasonal data, SARIMA, Holt-Winters, and decomposition methods from time series analysis, and random tree and random forest techniques from machine learning, are used. For this forecasting study, 396-day "number of patients admitted" data of a hospital located in Ankara is used. Forecasts in each method are performed for seven, fifteen, and thirty days. Root mean square error and mean absolute percentage error values are used to determine the most successful of the demand forecasting methods. In the analyses performed, it is observed that the SARIMA method gives more effective results than the others in forecasting the number of applications to the emergency department. In addition, because of the constantly changing and dynamic nature of applications to emergency services, it is understood that the length of the forecast horizon has a significant effect on the resulting forecast values. |
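As a minimal sketch of one of the seasonal methods named in the abstract above, the following pure-Python additive Holt-Winters forecaster and MAPE metric illustrate the general approach; the smoothing parameters and the weekly test series are hypothetical, not the hospital's 396-day data.

```python
def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, h=7):
    """Forecast h steps ahead for a series y with season length m
    (additive Holt-Winters triple exponential smoothing)."""
    level = sum(y[:m]) / m                            # initial level: first-season mean
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m ** 2   # initial per-step trend estimate
    season = [y[i] - level for i in range(m)]         # initial seasonal offsets
    for t in range(len(y)):
        s = season[t % m]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    n = len(y)
    return [level + (k + 1) * trend + season[(n + k) % m] for k in range(h)]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

For daily admissions data a weekly season length (m = 7) would be natural, and h = 7, 15, or 30 corresponds to the forecast horizons compared in the study.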
3. | Hypersoft game theory models and their applications in multi-criteria decision making Somen Debnath, Hüseyin Kamacı doi: 10.5505/pajes.2023.98340 Pages 680 - 691 Classical game theory has been extended to soft set structures, and thus soft game theory, fuzzy soft game theory, intuitionistic fuzzy soft game theory, and neutrosophic soft game theory have been introduced. The payoff function in the soft game approaches is a set-valued function and allows the use of set operations to obtain a solution, which makes it very convenient and easily applicable in practice. Also, in these game approaches, the strategies can be determined as attributes/parameters. That is, all these soft game theories are designed to manipulate parametric information using a single-attribute function. However, another powerful tool is needed to process parametric information obtained using a multi-attribute function. To model such problems mathematically, the concept of the hypersoft set has been proposed. In this paper, a game theory model based on hypersoft sets, called hypersoft game theory, is constructed. In this game theory, the payoff function is a set-valued function and the strategies are chosen as multi-attributes. A two-person hypersoft game is developed and different solution methods (such as the hypersoft saddle point method, the hypersoft elimination method, and the hypersoft Nash equilibrium method) are produced for such games. Also, the proposed methods are successfully applied to game theory-based decision making problems that may be encountered in real life. Finally, the two-person hypersoft game is extended to the n-person hypersoft game. The Nash equilibrium of an n-person hypersoft game is described and an application of this solution method is presented. |
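The hypersoft saddle point method mentioned above generalizes the classical saddle-point test for two-person games. As an illustrative sketch of the classical notion it builds on, the following checks whether a payoff matrix entry is simultaneously its row minimum and column maximum; the payoff matrix in the example is hypothetical, not from the paper.

```python
def saddle_points(payoff):
    """Return (row, col) positions of classical saddle points: entries that
    are the minimum of their row and the maximum of their column."""
    points = []
    for i, row in enumerate(payoff):
        for j, v in enumerate(row):
            col = [payoff[r][j] for r in range(len(payoff))]
            if v == min(row) and v == max(col):
                points.append((i, j))
    return points
```

For the hypothetical matrix `[[4, 1, 3], [3, 2, 4], [0, 1, 2]]` the only saddle point is at position (1, 1) with value 2, which is then the game's value under pure strategies.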
4. | Adaptive droop controller design for energy management system in DC microgrid architectures Ahmet Kaysal, Selim Köroğlu, Yüksel Oğuz doi: 10.5505/pajes.2023.09455 Pages 692 - 700 In this study, a hierarchical two-level energy management system with an adaptive droop control approach is proposed for a microgrid system consisting of distributed generation units. Within the hierarchical two-level control structure, the primary control layer controls the converters to transfer power from the distributed generation units. The secondary control layer is used to improve current sharing accuracy and to provide DC bus voltage restoration. The DC microgrid system is analyzed using Thévenin's equivalent model and Kirchhoff's laws. The distributed generation units' current sharing parameters are obtained based on the analysis results. The proposed adaptive droop control method is compared to the conventional droop control method. In the proposed system, the performance of the adaptive droop controller at 2250 W, where the demand is highest, is 9.65% better than the conventional method. Similarly, the proposed method performed 6.67% better in voltage regulation of the DC bus. The simulation results show that the designed control strategy improves voltage restoration for the DC microgrid under variable operating conditions and provides better power sharing between the sources. Thus, the constraints of the conventional droop control method are alleviated by the adaptive method. |
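The current-sharing limitation of conventional droop control discussed above can be sketched numerically: each converter obeys V_bus = V_ref − (R_droop + R_line)·I_i, so for a given load current the sharing depends on the total series resistances. All values below are hypothetical, not the paper's 2250 W test case.

```python
def droop_sharing(v_ref, load_current, r_droop, r_line):
    """Bus voltage and branch currents for parallel droop-controlled sources
    feeding a common DC bus; currents must sum to the load (KCL)."""
    g = [1.0 / (rd + rl) for rd, rl in zip(r_droop, r_line)]  # branch conductances
    v_bus = v_ref - load_current / sum(g)                      # common bus voltage
    return v_bus, [(v_ref - v_bus) * gi for gi in g]
```

With unequal line resistances, a small droop gain gives poor sharing; raising the droop gain equalizes the currents but increases the bus-voltage deviation. That trade-off is exactly what an adaptive droop with secondary voltage restoration, as in the paper, is meant to resolve.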
5. | Designing and optimizing a hybrid microgrid supplying various electric vehicle charging modes Farhia Abdullahi Mohamud, İpek Çetinbaş, Mehmet Demirtaş, Hasan Huseyin Erkaya doi: 10.5505/pajes.2023.68745 Pages 701 - 710 A microgrid system has been designed and optimized via the hybrid optimization of multiple energy resources (HOMER) software for the electricity needs of Eskisehir Osmangazi University (ESOGU). The microgrid consists of photovoltaic (PV) and wind turbine (WT) units with a utility grid connection and a battery energy storage system (BESS). The microgrid also supplies energy to electric vehicle (EV) charging units. The combination of EVs and renewable microgrids makes a major contribution to the clean energy transition of the global energy system. It also minimizes the cost of energy (COE) and of EV charging. Before analyzing the EV loads, a proper hybrid combination of the microgrid was optimized. The proposed systems were compared and analyzed according to their reliability, economics, and environmental impact. A financial analysis of the winning microgrid system was carried out and its performance was assessed throughout the duration of the project. Deferrable and on-demand EV charging modes were proposed in this study. To see the impact of EVs on the microgrid, various scenarios were created and applied to the design. The aim was to find a suitable charging method that mostly uses renewable energy sources to minimize the cost of electricity while protecting the environment. This study demonstrates the importance of smart charging and how it manages charging sessions by leveraging renewable resources and charging only when electricity is at its lowest cost. The results also show that the electricity bill of the investigated area is reduced by 36% by using the hybrid microgrid. |
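Tools of this kind rank candidate systems by their levelized cost of energy (COE), i.e. annualized cost divided by energy served. As a hedged sketch of that underlying arithmetic, the standard capital recovery factor and COE formulas are shown below with hypothetical cost figures, not ESOGU's actual data.

```python
def crf(i, n):
    """Capital recovery factor: annualizes a present cost over n years
    at discount rate i."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def cost_of_energy(capital, annual_om, i, n, annual_energy_kwh):
    """Levelized COE: (annualized capital + annual O&M) / energy served."""
    annualized = capital * crf(i, n) + annual_om
    return annualized / annual_energy_kwh
```

For example, a hypothetical $100,000 capital cost with $2,000/yr O&M over 20 years at a 5% discount rate, serving 50,000 kWh/yr, gives a COE of about $0.20/kWh.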
6. | Performance analysis and comparison of gray wolf optimization and Krill herd optimization algorithm Emine Baş, Ayşegül İhsan doi: 10.5505/pajes.2023.38739 Pages 711 - 736 Herding behavior is defined as a group of animals of similar size migrating in the same direction and hunting together. Gray wolves are usually seen in packs. Each gray wolf in the pack has a distinct duty and a distinct name that reflects that task. Krill swarms form the basis of ocean ecology. There are two reasons for the movement of a krill herd. The first is that it is difficult for other organisms to prey on krill living in herds. The second is that krill moving in large herds readily grab their prey. Gray Wolf Optimization (GWO) is inspired by gray wolf pack behavior, while Krill Herd Optimization (KHO) is based on krill herding. In this study, the GWO and KHO algorithms are examined in detail and their degree of success is assessed. The fact that GWO and KHO are swarm-based is a common feature of the two algorithms. GWO and KHO are compared on 23 unimodal, multimodal, and fixed-dimension multimodal benchmark optimization functions. In addition, the success of the algorithms is demonstrated by running them on various dimensions ({10, 20, 30, 50, 100, 500}). Additionally, the performances of GWO and KHO are compared with the Tree Seed Algorithm (TSA), Particle Swarm Optimization (PSO), the Jaya algorithm, the Arithmetic Optimization Algorithm (AOA), the Evolutionary Mating Algorithm (EMA), the Fire Hawk Optimizer (FHO), and the Honey Badger Algorithm (HBA). Moreover, all of the analyses are reported in detail, complete with statistical tests and figures. As a result, while the GWO and KHO algorithms show superior success on different test problems according to their own characteristics, they remain competitive with many older and newly proposed algorithms.
To determine the success of the GWO and KHO algorithms, not only the classical test functions but also two further benchmark sets are used: the CEC-C06 2019 functions and a big data problem, which is a current problem today. The same algorithms are run on both problems and rank values are obtained according to the average results. On the CEC-C06 2019 functions KHO achieved good results, while on the big data problem GWO achieved good results. In this study, the success of the GWO and KHO algorithms is examined in detail on three different experimental sets, shedding light for researchers who will work with the GWO and KHO algorithms. |
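As an illustrative sketch of the GWO position-update rule discussed above (candidate positions pulled toward the three best wolves, alpha, beta, and delta, with a coefficient a decreasing from 2 to 0), the following minimizes a standard sphere benchmark. Population size, iteration count, and bounds are hypothetical choices, not the paper's settings.

```python
import random

def sphere(x):
    """Benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def gwo(f, dim, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=1):
    """Plain Gray Wolf Optimization: each wolf moves to the average of
    three pulls toward the alpha, beta, and delta wolves."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    for it in range(iters):
        wolves.sort(key=f)
        leaders = wolves[:3]               # alpha, beta, delta
        a = 2.0 - 2.0 * it / iters         # exploration -> exploitation
        new_pack = []
        for w in wolves:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    D = abs(C * leader[d] - w[d])   # distance to leader
                    x += leader[d] - A * D
                pos.append(min(ub, max(lb, x / 3.0)))
            new_pack.append(pos)
        wolves = new_pack
    return min(wolves, key=f)
```

On unimodal functions like the sphere, the shrinking coefficient makes the pack collapse onto the best wolves, which is why GWO converges quickly on such benchmarks.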
7. | A binary enhanced moth flame optimization algorithm for uncapacitated facility location problems Ahmet Özkış, Murat Karakoyun doi: 10.5505/pajes.2023.49576 Pages 737 - 751 Moth Flame Optimization (MFO) is a nature-inspired meta-heuristic algorithm for solving continuous real-world problems. In this study, a modified version of MFO called the binary Enhanced MFO Desert Bush (binEMFO-DB) algorithm is proposed to solve uncapacitated facility location (UFL) problems. The proposed algorithm includes three modifications: i) chaotic map-based population initialization, ii) random flame selection, and iii) a desert bush strategy. The performance of the proposed binEMFO-DB algorithm was tested on 15 different UFL problems from the OR-Library, and a Taguchi orthogonal array design was used for parameter analysis. The average, gap, and hit values of the obtained results were used as performance metrics. The performance of binEMFO-DB is compared to that of state-of-the-art algorithms. The results show that the proposed binEMFO-DB has a successful and competitive performance in the test environment. |
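The objective that a binary algorithm such as binEMFO-DB minimizes for UFL can be sketched directly: the fixed cost of the open facilities plus each customer's cheapest service cost among them. The tiny instance below is hypothetical, not an OR-Library problem.

```python
def ufl_cost(open_mask, fixed, cost):
    """UFL objective for a 0/1 open-facility vector: fixed opening costs
    plus each customer's cheapest service cost among open facilities."""
    open_idx = [j for j, o in enumerate(open_mask) if o]
    if not open_idx:
        return float('inf')   # infeasible: every customer must be served
    total = sum(fixed[j] for j in open_idx)
    for customer_costs in cost:
        total += min(customer_costs[j] for j in open_idx)
    return total
```

A binary metaheuristic searches the space of open/closed masks with this function as its fitness; the "gap" metric in the abstract is the relative difference between the best mask found and the known optimum.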
8. | Optimization of photocatalytic treatment parameters by response surface method in dye removal with TiO2-ZrO2 catalyst Sefa Furkan Selçuk, Berk Köker, Meltem Sarıoğlu Cebeci doi: 10.5505/pajes.2023.03757 Pages 752 - 759 For the optimization of chemical processes, traditional one-factor-at-a-time methods, which vary one parameter while keeping the others constant, are insufficient. Methods that model the process and capture the interaction of parameters, such as the response surface method, provide both financial and time advantages. In this study, the response surface method and a central composite design were used to model and optimize the removal parameters of Maxilon Blue GRL dye by the photocatalytic method. The catalyst concentration, the amount of ZrO2 used in the production of the catalyst, and the reaction time were selected as removal parameters. The color removal efficiency was investigated as the response parameter. In the single-factor analysis, the optimum conditions were determined as 0.775 g/L catalyst concentration, 0.4 g ZrO2, and 45 min reaction time. From the analysis of the 3D surface and contour plots, the interaction between the catalyst concentration and ZrO2 amount parameters was found to be low, while the interaction of the reaction time parameter with the ZrO2 amount and catalyst concentration parameters was found to be high. In line with the results of the ANOVA analysis and the validation experiment, it has been shown that the predicted values of the model can represent the measured values. At the optimum conditions chosen for the validation experiment, the model's estimated value was 90.154%, while a 91% removal efficiency was achieved as a result of the analysis. |
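As a sketch of the central composite design mentioned above, the following generates the design points in coded units: 2^k factorial corners, 2k axial points at ±α, and center replicates, with the common rotatable choice α = (2^k)^(1/4). The factor count and replicate number here are illustrative, not the paper's actual design.

```python
from itertools import product

def central_composite(k, alpha=None, n_center=5):
    """CCD points in coded units: factorial corners, axial points at
    +/- alpha on each axis, and replicated center points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable design
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for j in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[j] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center
```

For the three factors in the study (catalyst concentration, ZrO2 amount, reaction time), k = 3 gives 8 corners + 6 axial points + center replicates; a second-order polynomial fitted to the responses at these points yields the 3D surface and contour plots the abstract describes.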
9. | Evaluation of microwave pretreated waste dry fig for biohydrogen production Serpil Özmıhçı, İlknur Hacıoğlu doi: 10.5505/pajes.2023.23238 Pages 760 - 768 In this study, the best conditions for aflatoxin removal and the highest dissolved sugar concentration were determined by applying microwave pretreatment (250-800 W) to waste dried figs. The biohydrogen production performance of aflatoxin-free dried figs was evaluated by testing different substrate concentrations (11.8-118 g/L) and different organism concentrations (0.5-2.5 g/L). The optimum microwave power of the pretreatment process, giving the highest dissolved sugar concentration (89.7 g/L) from 100 g/L waste dried figs, was 400 W with a treatment time of 10 minutes. Under these conditions, 46.2% hydrolysis of sugars and 77.8% aflatoxin removal were achieved. At an initial substrate concentration of 59 g/L, the highest cumulative hydrogen gas volume, hydrogen gas production yield, and hydrogen gas production rate were found to be 161.9 mL, 207.5 mL H2/g total sugar, and 1.75 mL/hour, respectively. Although lactic acid was produced in high amounts among the organic acids in the experiments using different substrate concentrations, it was observed to be converted into butyric acid in some experiments. This resulted in an increase in biohydrogen gas production. In the experiments with different organism amounts, sugar was consumed and hydrogen gas was produced successfully in all experiments. Butyric acid was the main organic acid in the experimental media where the highest hydrogen gas production yield, 124.28 mL H2/g total sugar, was obtained at 1 g/L organism concentration. The specific hydrogen gas production rate (SHPR) decreased with increasing organism concentration. The highest SHPR, 3.83 mL H2/g biomass·hour, was obtained at an organism concentration of 0.5 g/L. |
10. | Mineralogical-geochemical and gemological investigations of garnet porphyroblasts (Lal Stones) in the Menderes massif (Hacıaliler/Çine-Aydın) Ufuk Ören, Tamer Koralay doi: 10.5505/pajes.2023.47598 Pages 769 - 782 The ancient naturalist Pliny recorded that Lal Stones (garnet) of almandine composition, which had been widely used in the Hellenistic and Roman periods, were extracted from Alabanda (Çine) and Orthosia (Yenipazar) in Karia, Anatolia. The garnet samples that are the subject of this study occur in the medium-to-high-grade gneiss and mica-schist of the Menderes Massif in the Hacıaliler (Çine-Aydın) region. Purplish-brown and matt garnet porphyroblasts, varying between 0.5-2 cm in size, have crystallized in dodecahedral form. The garnet minerals display a poikiloblastic texture in microscopic examinations, are highly fractured, and contain abundant quartz, muscovite, and opaque mineral inclusions. Confocal Raman spectroscopy, a non-destructive analysis technique, showed that the garnets present a total of 10 different Raman vibrations, with strong bands at 910-912, 349, and 553-555 cm-1, and are typically of almandine composition. According to the mineral chemistry results, the garnets have the chemical formula Alm0.72-0.87 Grs0.07-0.19 Pyr0.02-0.13 Sps0.00-0.02. Garnet-biotite geothermometer calculations indicate that the garnets formed at an average temperature of 565.3 ± 20.8 °C and under a pressure of 6.6 kbar. The Hacıaliler garnets are depleted in LIL elements (Cs, Rb, Ba, K, Sr, Pb) in the multi-element variation diagram normalized to average continental crust (MCC). The garnet samples show enrichment in REE (∑REE: 192.2-212.1), with (La/Sm)N ratios of 2.62-2.89, (Sm/Yb)N ratios of 0.31-0.38, and (Eu/Eu*)N ratios of 0.41-0.44 in the chondrite-normalized REE variation diagram.
According to the non-destructive gemological tests, the specific gravity of the garnet crystals varies between 3.33 and 3.64, and the refractive index values are around 1.80-1.81. According to the L*a*b* color system, the average color of the garnet crystals was determined as L*: 46.25, a*: 6.55, b*: 6.60 (purplish brown). As a result of the mineralogical, geochemical, and gemological evaluations, it is concluded that the Hacıaliler garnet samples have undergone multi-stage metamorphism and have lost their bright and transparent crystal forms through subsequent geological processes (weathering, alteration, etc.); therefore, they do not present the characteristics of gemstones. In addition, the garnet samples are thought to have important potential in terms of their REE contents. |
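The end-member fractions reported above (Alm, Pyr, Grs, Sps) follow from the proportions of the divalent cations Fe, Mg, Ca, and Mn in the garnet formula. The following minimal sketch normalizes hypothetical cation numbers chosen to fall within the reported almandine-rich range; they are not the paper's microprobe data.

```python
def garnet_end_members(fe, mg, ca, mn):
    """Mole fractions of garnet end members (almandine, pyrope, grossular,
    spessartine) from divalent cations per formula unit."""
    total = fe + mg + ca + mn
    return {'Alm': fe / total, 'Pyr': mg / total,
            'Grs': ca / total, 'Sps': mn / total}
```

For example, hypothetical cation numbers Fe = 2.4, Mg = 0.2, Ca = 0.35, Mn = 0.05 give Alm0.80 Pyr0.07 Grs0.12 Sps0.02, inside the reported Alm0.72-0.87 Grs0.07-0.19 Pyr0.02-0.13 Sps0.00-0.02 range.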
11. | Investigation of the effects of filament fineness and disc type on yarn physical and mechanical properties of DTY polyester yarns Gülbin Fidan, Yasemin Korkmaz, Halil İbrahim Çelik doi: 10.5505/pajes.2023.03271 Pages 783 - 789 Polyester fiber is used as carpet yarn in different forms such as ATY (Air Textured Yarn), DTY (Drawn Textured Yarn), and BCF (Bulked Continuous Filament). Filament fineness is an important parameter affecting yarn properties. In this context, pile yarns produced with different filament finenesses are expected to have different breaking strength, breaking elongation, crimp contraction, and shrinkage properties, so carpets produced from these yarns will have different resilience. In addition, the use of polyurethane and ceramic discs, which have different characteristics, affects DTY yarn properties. In this study, the physical properties of DTY polyester pile yarns produced with different filament numbers and different discs were investigated. 1200 denier DTY pile yarns were produced by combining filaments of 3.13, 2.08, and 1.56 dpf filament fineness. Yarn production was carried out by the false twist texturing method with polyurethane and ceramic discs. The DTY pile yarn samples were subjected to breaking strength, breaking elongation, crimp contraction, and shrinkage tests. It was observed that filament fineness had no significant effect on breaking strength but had a significant effect on breaking elongation, crimp contraction, and shrinkage. Also, disc type had a significant effect on breaking strength, breaking elongation, and crimp contraction, but no significant effect on shrinkage. |
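The relation between yarn count and filament fineness used above is simply total denier = filament count × denier per filament (dpf). The following minimal sketch recovers approximate filament counts for the 1200 denier yarns; the counts are approximate because the quoted dpf values are themselves rounded.

```python
def filament_count(total_denier, dpf):
    """Approximate number of filaments in a yarn of given total denier,
    from total denier = filament count * denier per filament."""
    return round(total_denier / dpf)
```

So, finer filaments at the same total yarn count mean more filaments per yarn cross-section, which is why filament fineness changes bulk-related properties such as crimp contraction.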
12. | A Hazard and Operability (HAZOP) study on the supercritical fluid extraction process Mustafa Serhat Ekinci doi: 10.5505/pajes.2023.47527 Pages 790 - 796 Supercritical fluid extraction is known to have many advantages over traditional extraction methods. This method, which has a wide application area, attracts the attention of many researchers. Because of its advantageous properties, carbon dioxide is generally used as the supercritical fluid in these studies; consequently, the number of researchers using supercritical carbon dioxide extraction systems is quite high. These systems are known to involve serious hazards such as high pressure. In this study, the hazards of a supercritical CO2 system have been evaluated using a Hazard and Operability (HAZOP) study. The method was implemented as follows: the design intention of the supercritical CO2 extraction system was explained, guide words were applied to process parameters to obtain meaningful deviations from the design intent, the consequences arising from the deviations were determined under the assumption that all existing safeguards fail, the possible causes of the deviations were listed, the existing safeguards were evaluated, and actions were suggested. As a result of the study, it was found that excessive pressure can occur for many different reasons and can lead to serious consequences. To prevent the high-pressure hazard and the other identified hazards from becoming risks, precautions have been proposed. For deviations whose consequences are not dangerous, the causes of frequently encountered operability problems have also been determined and suggestions made. This study will contribute to filling the gap in the literature on safety in the supercritical fluid extraction process, and will help users of similar systems and designers with both safety and operability problems. |
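The deviation-generation step described above (guide words applied to process parameters) can be sketched mechanically. The guide-word list below is the standard HAZOP set; the process parameters are hypothetical examples, and meaningless combinations are left for the study team to screen out, as in the paper's procedure.

```python
from itertools import product

# Standard HAZOP guide words.
GUIDE_WORDS = ["No", "More", "Less", "Reverse", "As well as", "Part of", "Other than"]

def candidate_deviations(parameters, guide_words=GUIDE_WORDS):
    """Pair every guide word with every process parameter to enumerate
    candidate deviations; the HAZOP team then keeps only the meaningful
    ones (e.g. 'More Pressure', 'No Flow')."""
    return [(g, p) for g, p in product(guide_words, parameters)]
```

For a supercritical CO2 system, parameters such as pressure, temperature, and flow would yield deviations like "More Pressure", the overpressure scenario the study identifies as the most consequential.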