September 2025
Effect of side branch predilatation before provisional stenting in coronary bifurcation lesions
Yücel Kanal 1, Görkem Ayhan 2, Ülkü Nur Koç 1
1 Department of Cardiology, Cumhuriyet University, Sivas, 2 Department of Cardiology, Van Education and Research Hospital, Van, Turkey
DOI: 10.4328/ACAM.22655 Received: 2025-03-16 Accepted: 2025-04-24 Published Online: 2025-05-01 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):608-612
Corresponding Author: Yücel Kanal, Department of Cardiology, Cumhuriyet University, Sivas, Turkey. E-mail: yücel_kanal@hotmail.com P: +90 545 814 54 03 Corresponding Author ORCID ID: https://orcid.org/0000-0003-0934-0266
Other Authors ORCID ID: Görkem Ayhan, https://orcid.org/0000-0002-6682-5414 . Ülkü Nur Koç, https://orcid.org/0009-0002-1068-7267
This study was approved by the Ethics Committee of Sivas Cumhuriyet University (Date: 2025-01-16, No: 2025-01/36)
Aim: Percutaneous coronary intervention (PCI) for bifurcation lesions remains technically challenging. Despite the development of single-stent provisional techniques and various dual-stent strategies, the problem of side branch (SB) occlusion and restenosis remains unresolved. We aimed to compare end-of-procedure SB patency in patients with bifurcation lesions and significant SB disease undergoing provisional stenting, with and without SB predilatation.
Materials and Methods: This retrospective observational study included 115 patients who underwent provisional PCI for true bifurcation lesions between January 2021 and November 2024. Patients were divided into two groups: those who received SB predilatation before provisional stenting and those who did not.
Results: The mean age of the 115 patients in our study was 64.6 ± 10.4 years, and 31% were female. In the predilatation group, SB lesion severity, procedure duration, dissection rate, and kissing balloon requirement were significantly higher compared to the non-predilatation group (p < 0.05).
Discussion: Our study showed that routine SB predilatation during provisional stenting of true bifurcation lesions is associated with higher rates of dissection and kissing balloon use. Consistent with the literature, routine SB predilatation does not appear advisable. When predilatation is performed, NC balloons may be preferable, as they may reduce dissection rates. Large-scale studies will be helpful in further elucidating these findings.
Keywords: percutaneous coronary intervention, bifurcation lesions, coronary artery disease, side branch
Introduction
Coronary artery disease (CAD) is a major cause of mortality and morbidity worldwide [1]. Percutaneous coronary intervention (PCI) and coronary artery bypass grafting (CABG) are the mainstays of invasive treatment for CAD [2]. Although PCI has become the standard treatment for CAD, PCI for bifurcation lesions remains technically challenging [3]. Despite the development of the single-stent provisional technique and various two-stent strategies, the issue of side branch (SB) restenosis has not been fully resolved [4]. A two-stent approach may be required as the primary option in the most complex bifurcation lesions, in rare cases of difficult SB access, or in long SB lesions [5]. In patients with an SB lesion length of <10 mm and an SB diameter of <2 mm, provisional stenting is generally preferred [6]. In such lesions, direct provisional stenting of the main branch may compromise SB flow and make rewiring of the SB challenging, which has led to consideration of balloon predilatation of the SB before main branch stenting. However, according to the latest statements from the European Bifurcation Club, SB predilatation is generally not recommended [6]. The primary reason is that if dissection occurs in the SB after ballooning, rewiring the SB after provisional stenting of the main branch may become difficult [7]. One study suggests that SB predilatation is advisable particularly when the risk of SB occlusion after main branch stenting is high, as in long ostial SB lesions or severely calcified lesions [8]. Another study, comparing long-term mortality between patients who did and did not undergo SB predilatation, found higher mortality in the predilatation group [15]. However, the type of balloon used was not specified in these studies.
Thus, there are insufficient data in the literature to establish a clear consensus on SB predilatation. In our study, we aimed to compare post-procedural SB patency in patients with bifurcation lesions and significant SB disease who underwent provisional stenting with or without SB predilatation.
Materials and Methods
This retrospective observational study reviewed patients who underwent coronary angiography (CAG) between January 2021 and November 2024. Among them, patients with true bifurcation lesions who underwent provisional stenting were included. Exclusion criteria were: direct two-stent bifurcation stenting; SB diameter <2 mm; SB lesion length >10 mm; isolated SB ostial lesions; life expectancy of less than one year; left main coronary artery stenosis; total occlusion of the relevant vessel; left ventricular ejection fraction (LVEF) <30%; moderate or severe valvular heart disease; primary cardiomyopathy; active cancer; active autoimmune disease; active infection; end-stage liver failure; and chronic kidney disease requiring hemodialysis or peritoneal dialysis. The remaining 115 patients formed the final study cohort (Figure 1). This study was conducted in accordance with the Declaration of Helsinki.
Data were obtained from the catheter laboratory archives of our hospital’s cardiology department, patient files, and the hospital’s electronic record system. Blood samples were collected after an overnight fast following the procedure. Complete blood counts (CBC) were performed on a Coulter Counter LH Series analyzer (Beckman Coulter Inc., Hialeah, Florida, USA), and biochemical analyses on a Roche Diagnostics analyzer (Mannheim, Germany).
The CAG images were reviewed, and SB diameters and the percentage of stenosis were assessed by two experienced interventional cardiologists. Patients with true bifurcation lesions who underwent provisional stenting were divided into two groups: those who underwent SB predilatation before main branch stenting and those who did not. Semi-compliant (SC) and non-compliant (NC) balloons were used for SB predilatation. The two groups were compared for procedural outcomes, including a decrease in SB flow velocity after main branch stenting, dissection at the SB ostium, chest pain, electrocardiogram (ECG) changes during the procedure, the need for kissing balloon inflation, the need for SB stenting, and the overall procedural complication rate. Demographic and laboratory differences between the two groups were also assessed, and subgroups based on the type of balloon used were evaluated for the same procedural outcomes.
Statistical analysis
SPSS 21.0 for Windows (SPSS Inc., Chicago, Illinois, USA) was used for statistical analysis. Continuous variables were expressed as mean ± standard deviation or median (minimum, maximum), and categorical variables as number (percentage). The Kolmogorov-Smirnov test was used to assess normality of distribution. Categorical variables were compared with the chi-square or Fisher’s exact test, and continuous variables with the Mann-Whitney U test or Student’s t-test. A two-sided p < 0.05 was considered statistically significant.
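The test-selection logic described above (normality check, then parametric vs. non-parametric comparison; Fisher's exact test when expected cell counts are small) can be sketched as follows. This is an illustrative reconstruction in Python/SciPy, not the authors' SPSS workflow, and all data in it are hypothetical.

```python
# Illustrative sketch (not the authors' actual SPSS analysis) of the test
# selection described in the text; all data below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(64, 10, size=60)   # e.g. a continuous variable such as age
group_b = rng.normal(65, 10, size=55)

def is_normal(x, alpha=0.05):
    """Kolmogorov-Smirnov test against a fitted normal distribution."""
    z = (x - x.mean()) / x.std(ddof=1)
    return stats.kstest(z, "norm").pvalue > alpha

if is_normal(group_a) and is_normal(group_b):
    _, p = stats.ttest_ind(group_a, group_b)        # Student's t-test
else:
    _, p = stats.mannwhitneyu(group_a, group_b)     # Mann-Whitney U test

# Hypothetical 2x2 table (e.g. a complication rate by group)
table = np.array([[8, 13], [0, 21]])
if (stats.contingency.expected_freq(table) < 5).any():
    _, p_cat = stats.fisher_exact(table)            # small expected counts
else:
    _, p_cat, _, _ = stats.chi2_contingency(table)  # chi-square otherwise
```

The 2x2 branch mirrors the common rule of thumb of switching to Fisher's exact test when any expected cell count falls below 5.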
Ethical Approval
Our study was approved by the Ethics Committee of Sivas Cumhuriyet University (Date:2025-01-16, No: 2025-01/36).
Results
The mean age of the 115 patients was 64.6 ± 10.4 years, and 31% were female. Demographic and clinical data, compared between the predilatation (+) and predilatation (-) groups, are listed in Table 1; no significant differences were found in demographic or clinical characteristics. Among angiographic characteristics, SB lesion severity, procedure duration, dissection rate, and kissing balloon requirement were significantly higher in the predilatation group than in the non-predilatation group (p < 0.05) (Table 1). In contrast, ECG changes during the procedure were significantly more frequent in the non-predilatation group (p < 0.05) (Table 1). No significant differences were found between the two groups in procedural complications. Laboratory data are presented in Table 1, with no significant differences between the groups. When patients who underwent predilatation with only an NC balloon were compared with the non-predilatation group, no significant differences were found in angiographic parameters or procedural complications (Table 2). When patients predilated with only an SC balloon were compared with those predilated with only an NC balloon, the rate of contrast-induced nephropathy was significantly higher in the SC balloon group (38.1% vs. 0%; p = 0.011); other angiographic outcomes and procedural complications did not differ significantly (Table 3).
Discussion
In our study, patients with true bifurcation lesions who underwent provisional PCI were compared according to whether the SB ostium was predilated. In the analysis including all procedural complications, no significant difference was found between the two groups. However, procedure duration, dissection rate, and the need for kissing balloon inflation were significantly higher in the predilatation group (p < 0.05). No cardiovascular deaths were observed in either group. SB ostial lesion severity was significantly higher in the predilatation group (80% vs. 50%, p < 0.001), suggesting that operators tend to perform predilatation more often as the severity of SB ostial lesions increases. In a recent study by Carvalho et al. (DOI: 10.1002/ccd.31465), 143 of 428 patients with true bifurcation lesions underwent SB predilatation, and in-hospital and procedural outcomes were compared. Consistent with our findings, the degree of SB lesion stenosis was significantly higher in patients who underwent predilatation. In that study, the rate of conversion to a dual-stent strategy was also higher in the predilatation group, while procedural complications and in-hospital events were similar between the two groups.
In the treatment of bifurcation lesions, provisional stenting with the proximal optimization technique (POT) forms the cornerstone of the current approach to coronary bifurcation interventions [9,10]. With this technique, the potential loss of the SB and the subsequent difficulty of rewiring it compel operators to seek solutions [6,7,10]; the main concern is that dissection may make SB rewiring problematic [11]. Previous studies have observed a higher rate of crossover to SB stenting in patients who underwent SB predilatation [12,13]. In line with the literature, patients in our study who underwent SB predilatation exhibited a higher incidence of dissection; consequently, flow loss and the need for kissing balloon inflation were significantly higher in the predilatation group. In the first study on this topic, the COBIS registry, 837 patients were included, 175 of whom underwent predilatation. In the predilatation group, significantly more patients required rescue two-stent placement and kissing balloon inflation, and after 21 months of follow-up, the target vessel revascularization (TVR) rate was higher [14]. Similarly, we observed a higher incidence of SB ostial dissection and need for kissing balloon inflation. However, when the group predilated with only an NC balloon was compared with the non-predilatation group, no significant differences were found in dissection, kissing balloon need, or overall procedural complications. Likewise, when patients predilated with an NC balloon were compared with those who received an SC balloon, the dissection rate was numerically lower in the NC group, but the difference was not significant.
Contrast-induced nephropathy was significantly less common in the NC predilatation group than in the SC predilatation group (p < 0.05). A likely explanation is that, with fewer complications in the NC balloon group, procedures were completed more quickly and with less contrast. These results suggest that NC balloon predilatation may be associated with improved procedural success and a reduction in SB flow loss; further studies with larger patient populations would be valuable. In the study by Vassilev et al., 324 of 831 patients who underwent bifurcation PCI received predilatation and were followed for long-term mortality. Although angiographic outcomes were significantly better in the predilatation group, 8-year follow-up revealed worse patient survival [15]. The long-term SB restenosis and TVR findings in these studies suggest that balloon dilatation alone does not prevent lesion recoil over time; studies of drug-coated balloons for SB predilatation could therefore provide clearer data. Although our study does not include long-term mortality data, no in-hospital mortality was observed in any patient.
Limitation
This study was designed retrospectively and was not blinded to the operator or the patient. Whether balloon predilatation was performed, and the type of balloon used, depended on the operator, making the study subjective and susceptible to some degree of bias. The inclusion of both elective and emergency PCI patients is another important limitation, as this may affect the results, and it is difficult to attribute outcomes separately to these clinical settings. Finally, the sample size was limited and the study was conducted at a single center, which may restrict the generalizability of the results; multicenter studies with larger samples would help validate their applicability to a broader patient population.
Conclusion
In conclusion, in our study of provisional stenting for true bifurcation lesions, routine SB predilatation was associated with a higher incidence of dissection and need for kissing balloon inflation. Consistent with the literature, it seems more reasonable not to perform routine predilatation. In patients who do require predilatation, the use of NC balloons may be beneficial, as it could reduce the incidence of dissection. Further studies with larger patient samples would be valuable in this regard.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content, including study design, data collection, analysis and interpretation, writing, preparation and scientific review of the contents, and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Rosamond W, Flegal K, Furie K, et al. Heart disease and stroke statistics–2008 update: a report from the American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Circulation. 2008;117(4):e25-e146.
2. Alfonso F, Pan M. Do we know how to treat bifurcation coronary lesions? Revista Española de Cardiología (English Edition). 2014;67(10):787-872.
3. Pan M, Suárez LJ, Medina A, et al. Simple and complex stent strategies for bifurcated coronary arterial stenosis involving the side branch origin. Am J Cardiol. 1999;83(9):1320-5.
4. Sharma SK, Mareş AM, Kini AS. Coronary bifurcation lesions. Minerva Cardioangiol. 2009;57(5):667-82.
5. Pan M, Gwon HC. The story of side branch predilatation before provisional stenting. EuroIntervention. 2015;11(5):78-80.
6. Stankovic G, Lefèvre T, Chieffo A, et al. Consensus from the 7th European Bifurcation Club meeting. EuroIntervention. 2013;9(1):36-45.
7. Albiero R, Burzotta F, Lassen JF, et al. Treatment of coronary bifurcation lesions, part I: Implanting the first stent in the provisional pathway. The 16th expert consensus document of the European Bifurcation Club. EuroIntervention. 2022;18(5):362-76.
8. Darremont O, Leymarie JL, Lefèvre T. Technical aspects of the provisional side branch stenting strategy. EuroIntervention. 2015;11(5):86-90.
9. Volet C, Puricel S, Cook ST, et al. Proximal optimization technique and percutaneous coronary intervention for left main disease: POTENTIAL-LM. Catheter Cardiovasc Interv. 2024;103(3):417-24.
10. Burzotta F, Lassen JF, Lefevre T, et al. Percutaneous coronary intervention for bifurcation coronary lesions: the 15(th) consensus document from the European Bifurcation Club. EuroIntervention. 2021;16(16):1307-17.
11. Sgueglia GA, Todaro D, Pucci E. Complexity and simplicity in percutaneous bifurcation interventions. EuroIntervention. 2010;6(5):664-5.
12. Colombo A, Bramucci E, Saccà S, et al. Randomized study of the crush technique versus provisional side-branch stenting in true coronary bifurcations: the CACTUS (Coronary Bifurcations: Application of the Crushing Technique Using Sirolimus-Eluting Stents) Study. Circulation. 2009;119(1):71-8.
13. Hildick-Smith D, de Belder AJ, Cooter N, et al. Randomized trial of simple versus complex drug-eluting stenting for bifurcation lesions: The British Bifurcation Coronary Study: old, new, and evolving strategies. Circulation. 2010;121(10):1235-43.
14. Song PS, Song YB, Yang JH, et al. The impact of side branch predilatation on procedural and long-term clinical outcomes in coronary bifurcation lesions treated by the provisional approach. Revista Española de Cardiología (English Edition). 2014;67(10):787-872.
15. Vassilev D, Mileva N, Panayotov P, et al. Side branch predilatation during percutaneous coronary bifurcation intervention: Long-term mortality analysis. Kardiol Pol. 2024;82(4):398-406.
The impact of anisometropic amblyopia on children’s quality of life and mental health: A case-control study
Fatma Sumer 1, Merve Yazici 2, Seher Sumer 3
1 Department of Ophthalmology, Faculty of Medicine, Recep Tayyip Erdogan University, Rize, 2 Department of Child and Adolescent Mental Health and Diseases, Faculty of Medicine, Recep Tayyip Erdogan University, Rize, 3 Department of Family Medicine, Istanbul Training and Research Hospital, Istanbul, Turkiye
DOI: 10.4328/ACAM.22762 Received: 2025-05-31 Accepted: 2025-07-03 Published Online: 2025-08-19 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):613-616
Corresponding Author: Fatma Sumer, Department of Ophthalmology, Faculty of Medicine, Recep Tayyip Erdogan University, Rize, Turkiye. E-mail: drfatmasumer@gmail.com P: +90 530 904 39 73 Corresponding Author ORCID ID: https://orcid.org/0000-0002-4146-8190
Other Authors ORCID ID: Merve Yazici, https://orcid.org/0000-0001-8217-0043 . Seher Sumer, https://orcid.org/0009-0005-0430-3054
This study was approved by the Ethics Committee of Recep Tayyip Erdogan University School of Medicine (Date: 2023-11-23, No: 261).
Aim: We aimed to compare children with anisometropic amblyopia with their healthy peers in terms of quality of life and mental disorders and to identify possible psychosocial difficulties in children with amblyopia.
Materials and Methods: This prospective case-control study included 95 children aged 5-12 years with unilateral anisometropic amblyopia between December 2023 and December 2024. Sociodemographic data form, Pediatric Quality of Life Inventory (PedsQL), and Strengths and Difficulties Questionnaire (SDQ) were applied to participants.
Results: Mean age was 8.31 years in the patient group and 8.55 years in controls (p=0.058). The amblyopia group showed significantly lower scores in PedsQL total score, physical health, and psychosocial health subscales, as well as higher scores in the SDQ peer problems subscale compared to controls.
Discussion: Children with amblyopia demonstrated impaired quality of life in both physical and psychosocial domains and experienced greater peer relationship difficulties. Ophthalmologists should be aware of these psychosocial impacts for early detection and intervention.
Keywords: amblyopia, quality of life, behavioral problems, emotional problems, peer relations
Introduction
Amblyopia affects approximately 1-4% of children and represents the leading monocular cause of visual impairment in individuals under 40 years [1]. This unilateral or bilateral visual impairment that does not improve with correction can be effectively prevented or corrected with timely interventions [2].
Vision encompasses multiple parameters, including shape recognition, motion perception, stereopsis, and contrast sensitivity [3]. Even the fellow eye in amblyopes shows inferior performance compared to normal subjects on psychophysical tests [4]. Normal binocular development requires healthy competition between eyes, and in amblyopia, the better eye’s development is delayed due to a lack of competition [5].
Research demonstrates that children with amblyopia have lower self-perception and social acceptance compared to healthy peers. Birch et al. reported that amblyopic children feel less competent academically, socially, and athletically, leading to inadequacy and isolation [6]. Studies show that untreated amblyopia leads to poorer health-related quality of life, with children experiencing emotional and behavioral difficulties [7]. Treatment-related psychological effects can contribute to increased stress for children and parents [8].
We aimed to compare children with anisometropic amblyopia with healthy peers regarding quality of life and mental health, identifying possible psychosocial difficulties in this population.
Materials and Methods
This prospective case-control study was conducted at Recep Tayyip Erdogan University Training and Research Hospital following institutional ethics board approval and in accordance with the Declaration of Helsinki. Written informed consent was obtained from all participants.
The study population consisted of children aged 5-12 years diagnosed with unilateral anisometropic amblyopia between December 2023 and December 2024. Anisometropic amblyopia was defined as: (1) ≥2 lines difference in best-corrected visual acuity between eyes, (2) anisometropia ≥1.0 D spherical or cylindrical difference, (3) absence of strabismus, and (4) no significant macular pathology or media opacity.
Visual acuities were recorded using Snellen charts. Amblyopia severity was classified as mild (6/9 to 6/12), moderate (6/18 to 6/36), or severe (<6/60). Anterior segments were evaluated by slit lamp (Topcon SL-3C), and intraocular pressure was measured by pneumotonometry (Topcon Computerized Tonometer). Cover-uncover and alternate prism cover tests excluded strabismus. Cyclopentolate hydrochloride 1% was instilled, and after 45 minutes, skiascopy and autorefractometry (Topcon Auto Kerato-refractometer) determined refractive errors. Fundoscopic examination was performed with direct ophthalmoscopy and a +90D lens.
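The severity bands above can be expressed as a small classification helper. This is a hypothetical sketch (the function name and structure are our own, not from the study); note that the bands as printed leave some acuities (e.g. 6/15, 6/48) unassigned, which the sketch surfaces explicitly rather than guessing.

```python
# Hypothetical helper encoding the severity bands quoted in the text,
# keyed on the Snellen denominator ("6/x" notation). Acuities outside
# the stated bands are reported as unclassified rather than guessed.
def amblyopia_severity(snellen_denominator: int) -> str:
    if 9 <= snellen_denominator <= 12:
        return "mild"          # 6/9 to 6/12
    if 18 <= snellen_denominator <= 36:
        return "moderate"      # 6/18 to 6/36
    if snellen_denominator > 60:
        return "severe"        # worse than 6/60
    return "unclassified"      # gap in the stated bands (e.g. 6/15, 6/48)

# amblyopia_severity(12) → "mild"; amblyopia_severity(120) → "severe"
```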
Exclusion criteria included: other amblyopia types, visual acuity <0.1 in the amblyopic eye, high refractive errors, organic eye diseases, systemic diseases, psychiatric diagnosis, intraocular pressure >21 mmHg, nystagmus, previous amblyopia treatment, and strabismus. Controls were children aged 5-12 years with normal uncorrected visual acuity, no refractive error, and no chronic diseases.
Assessment Tools
Pediatric Quality of Life Inventory (PedsQL): A standardized 23-item instrument evaluating health-related quality of life in pediatric populations, validated in Turkish [9,10]. It covers physical health (8 items), emotional well-being (5 items), social interactions (5 items), and academic functioning (5 items), using a five-point Likert scale. Higher scores indicate better quality of life.
Strengths and Difficulties Questionnaire (SDQ): A 25-item behavioral screening tool assessing conduct problems, hyperactivity/inattention, emotional symptoms, peer relationship problems, and prosocial behavior. The Turkish adaptation demonstrates acceptable validity and reliability [11].
Statistical Methods
Data analysis was conducted using IBM SPSS Statistics Version 26.0. Normality was assessed using Kolmogorov-Smirnov and Shapiro-Wilk tests. An independent samples t-test was used for parametric comparisons, Mann-Whitney U test for non-parametric comparisons. Categorical variables were analyzed using the Pearson chi-square test, with Fisher’s exact test when appropriate. Post-hoc comparisons used Bonferroni correction. P-values <0.05 were considered significant.
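The Bonferroni correction mentioned above simply multiplies each raw post-hoc p-value by the number of comparisons (capped at 1.0). A minimal sketch, using hypothetical p-values rather than the study's results:

```python
# Illustrative Bonferroni adjustment for post-hoc comparisons; the raw
# p-values here are hypothetical, not taken from the study.
raw_p = [0.004, 0.020, 0.030]                 # hypothetical post-hoc p-values
m = len(raw_p)                                # number of comparisons
adjusted = [min(p * m, 1.0) for p in raw_p]   # Bonferroni: p_adj = min(m*p, 1)
significant = [p < 0.05 for p in adjusted]
# Only the first comparison survives correction: [True, False, False]
```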
Ethical Approval
This study was approved by the Ethics Committee of Recep Tayyip Erdogan University School of Medicine (Date: 2023-11-23, No: 261).
Results
The study included 95 participants (55 female, 40 male) in the patient group and 93 participants (50 female, 43 male) in the control group (p=0.568). The mean age was 8.31 years in patients and 8.55 years in controls (p=0.058).
Significant differences existed in maternal (p=0.021) and paternal (p=0.009) education levels between groups, with lower education levels in the amblyopia group. No other demographic differences were observed (Table 1).
Scale score comparisons revealed significant differences in the physical health domain (p=0.002), with markedly lower scores in the patient group. Psychosocial health scores were also significantly lower in the amblyopia group (p<0.001). SDQ peer relationship scores were significantly higher in the patient group (median=3) compared to controls (median=2) (p<0.001), indicating greater peer difficulties. No significant differences were found in other SDQ subscales (Table 2).
Discussion
This study represents one of the few investigations focusing specifically on the impact of an amblyopia diagnosis on children’s quality of life and psychological well-being, distinct from treatment-related effects.
Our results demonstrated that amblyopic children exhibited significantly diminished health-related quality of life in physical health, psychosocial functioning, and overall scores compared to healthy controls. Additionally, peer relationship problems were significantly more pronounced, suggesting amblyopia specifically impairs social integration while not uniformly affecting all emotional and behavioral domains.
The physical health deficits likely reflect the motor-visual integration difficulties inherent in amblyopia. Reduced depth perception and impaired binocular vision affect hand-eye coordination and spatial navigation, impacting daily activities and sports participation [9]. The psychosocial impairments encompassing emotional, social, and academic functioning suggest broader developmental consequences beyond visual deficits.
Our findings align with previous research showing quality of life impacts in amblyopic children [12, 13]. However, unlike studies examining treatment-related psychosocial effects, our research isolated the diagnosis-specific impacts by excluding children undergoing treatment. This approach eliminates confounding factors such as patching stigma or treatment burden [14, 15].
The peer relationship difficulties observed merit particular attention. Unlike strabismus, which has visible cosmetic effects potentially affecting social interactions [16], anisometropic amblyopia lacks obvious external signs. The peer problems likely stem from functional visual deficits affecting social activities, sports participation, and academic performance rather than appearance-related stigma [6].
The lower parental education levels in the amblyopia group highlight socioeconomic disparities affecting amblyopia prevalence and management. Lower socioeconomic status is associated with delayed diagnosis, reduced treatment access, and poorer adherence [17-19]. This finding suggests amblyopia disproportionately affects disadvantaged populations, potentially exacerbating existing health inequalities.
Our study’s strength lies in focusing exclusively on anisometropic amblyopia, creating a homogeneous population and eliminating the confounding effects of strabismus or treatment-related factors. The exclusion of children with mental disorders and chronic illnesses from both groups enhanced comparability and reduced confounding variables.
Limitation
Several limitations should be acknowledged. First, this single-center study with a limited sample size may restrict generalizability. Multi-center studies with larger populations would enhance applicability. Second, focusing specifically on anisometropic amblyopia, while creating homogeneity, limits generalizability to other amblyopia subtypes. Third, we used a general health-related quality of life scale rather than vision-specific instruments, as validated Turkish versions of pediatric vision-specific quality of life questionnaires were unavailable. Finally, the cross-sectional design prevents causal inferences about the relationship between amblyopia and quality of life outcomes.
Conclusion
Children with anisometropic amblyopia demonstrate significantly impaired quality of life in physical and psychosocial domains and experience greater peer relationship difficulties compared to healthy peers. These findings highlight the importance of ophthalmologists recognizing potential psychosocial impacts alongside visual deficits. Early identification and appropriate interventions addressing both visual and psychosocial aspects may be protective against negative developmental outcomes. Given amblyopia’s positive response to early treatment, comprehensive management should consider quality of life impacts to optimize outcomes and prevent long-term psychosocial consequences.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content, including study design, data collection, analysis and interpretation, writing, preparation and scientific review of the contents, and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
Fatma Sumer, Merve Yazici, Seher Sumer. The impact of anisometropic amblyopia on children’s quality of life and mental health: A case-control study. Ann Clin Anal Med 2025;16(9):613-616
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc/4.0/
Sleep quality in stroke patients receiving botulinum toxin treatment for spasticity
Sıdıka Büyükvural Şen 1, Betül Yavuz Keleş 2
1 Department of Physical Therapy and Rehabilitation, Faculty of Medicine, University of Health Sciences, Adana City Training and Research Hospital, Adana, Turkey, 2 Department of Physical Therapy and Rehabilitation, Malarsjukhuset Rehabilitering Medicin, Eskilstuna, Sweden
DOI: 10.4328/ACAM.22766 Received: 2025-06-05 Accepted: 2025-07-08 Published Online: 2025-07-18 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):617-622
Corresponding Author: Sıdıka Büyükvural Şen, Department of Physical Therapy and Rehabilitation, Faculty of Medicine, University of Health Sciences, Adana City Training and Research Hospital, Adana, Turkey. E-mail: sbuyukvuralsen@gmail.com P: +90 506 532 88 06 Corresponding Author ORCID ID: https://orcid.org/0000-0003-1084-4226
Other Authors ORCID ID: Betül Yavuz Keleş, https://orcid.org/0000-0003-3370-3696
This study was approved by the Adana City Training and Research Hospital Clinical Research Ethics Committee (Date: 2021-07-14, No: 85-KAEK-1490).
Aim: Botulinum toxin (BoNT) is widely used in the treatment of stroke-related spasticity. Sleep disturbances are often observed as a comorbidity or complication in stroke patients. In light of the data linking spasticity with sleep disorders, this study aimed to evaluate whether spasticity treatment with BoNT improved sleep quality, quality of life, and anxiety levels in stroke patients.
Materials and Methods: This observational, cross-sectional study included 38 hemiplegic patients with focal spasticity who were scheduled for BoNT injections following a stroke. Assessments were undertaken before BoNT injections and at the first and third months after treatment. Clinical evaluations included the Modified Ashworth Scale (MAS) for spasticity, the Visual Analog Scale (VAS) for pain and spasticity severity, Brunnstrom staging for hand, upper, and lower extremities, the Pittsburgh Sleep Quality Index (PSQI) for sleep quality, the Hospital Anxiety and Depression Scale (HADS-A and HADS-D) for anxiety and depression levels, and the EuroQol-5 Dimensions-5 Levels (EQ-5D-5L) for quality of life.
Results: Post-treatment assessments revealed a statistically significant reduction in Brunnstrom staging (hand, upper, and lower extremities) at both the first and third months compared to pre-treatment (p<0.001). Similarly, MAS, VAS, HADS-A, HADS-D, PSQI, and EQ-5D-5L scores showed significant improvements at the first and third months after treatment (p<0.001).
Discussion: The results of this study demonstrated that BoNT, frequently used in the treatment of stroke-related spasticity, had a positive effect on sleep quality in stroke patients.
Keywords: botulinum toxin, pittsburgh sleep quality index, sleep quality, stroke
Introduction
Stroke is one of the leading causes of death and disability across the world. As survival rates improve, the importance of addressing post-stroke disability has become more evident [1]. One of the major contributors to post-stroke disability is spasticity, which can lead to functional impairment, pain, and deterioration in sleep and quality of life. Spasticity is defined as a sensory-motor disorder resulting from upper motor neuron lesions, characterized by intermittent or sustained involuntary muscle activation [2]. The prevalence of post-stroke spasticity has been reported to vary between 4% and 42.6% in various studies [1].
Although spasticity may contribute to standing and walking or help prevent osteoporosis and deep vein thrombosis, it is known to impair transfers, ambulation, sleep quality, and daily living activities. Treatment options for spasticity include various physical therapy modalities, pharmacological interventions, local injections (including botulinum toxin (BoNT)), and surgical treatments [3].
BoNT is a neurotoxin derived from the bacterium Clostridium botulinum; it binds presynaptically at the neuromuscular junction and inhibits the release of acetylcholine, thereby blocking neuromuscular transmission [4]. It reduces spasticity by causing temporary, focal muscle weakness lasting three to six months [4].
BoNT is widely used in the treatment of spasticity due to its efficacy, ease of application, low side effect profile, and reversible effects. It is recommended as the first-line treatment for focal spasticity following a stroke [5]. Suppression of spasticity through BoNT injections has been shown to reduce functional limitations [6]. Several studies have demonstrated that the presence of spasticity after a stroke can affect sleep [2, 6]. While it is well known that BoNT treatment reduces spasticity in patients with post-stroke spasticity, only a few publications have investigated its effects on sleep quality in these patients [7]. The primary aim of this study was to evaluate how sleep quality was affected in patients receiving botulinum toxin injections for spasticity after stroke. The secondary objective was to investigate the association between sleep quality following BoNT injection and relevant demographic and clinical variables.
Materials and Methods
This study was designed as a single-center, observational, cross-sectional study. Patients who received BoNT injection therapy for post-stroke spasticity at our clinic between August 2021 and March 2022 were evaluated. Spasticity of at least grade 1 in the target muscle receiving BoNT was defined as an inclusion criterion. Patients who did not meet the exclusion criteria listed below and who received the same total dose of BoNT (300 IU) were included in the study.
Exclusion criteria
– History of BoNT treatment,
– Any neurological disorders other than stroke,
– Uncontrolled diabetes, uncontrolled hypertension, and uncontrolled cardiovascular disease,
– Pre-existing sleep disorders
– Malignancy
– Pregnancy
– Presence of allergy or sensitivity to BoNT
– Fixed contractures in the targeted muscle
In our clinic, BoNT injections were diluted with 2 mL of 0.9% NaCl and administered at a total maximum dose of 300 IU. The injections were performed by the same doctor, under ultrasound guidance, targeting the planned muscles in the upper and lower extremities.
All patients’ demographic characteristics (age, gender, disease duration and etiology, and educational level) were assessed. Clinical evaluations were performed by the same researcher on the day of injection and at the first and third months after injection. Patients’ upper extremities, hands, and lower extremities were evaluated functionally with Brunnstrom staging. The Brunnstrom staging system was used to evaluate the neurophysiological development stages of the patients [8]. According to this staging, patients are assessed across six stages based on the development of spasticity and synergy patterns. The lowest stage is Stage I (where voluntary movement is absent), and the highest stage is Stage VI (where isolated joint movement occurs). A higher stage indicates a better level of motor recovery in the patient. The upper extremity, lower extremity, and hands were evaluated separately.
The grade of spasticity in the muscles treated with BoNT was evaluated by the same physician using the Modified Ashworth Scale (MAS). The spasticity grade was defined for the elbow (MAS-E), wrist (MAS-W), finger (MAS-F), and ankle (MAS-A) flexor muscles. The MAS is one of the most commonly used clinical scales for assessing spasticity. It was derived by Bohannon and Smith in 1987 by adding a 1+ grade to the classic Ashworth scale, and its interrater reliability has been established [9]. The assessment consists of a six-level scoring system (0 = no increase in muscle tone; 4 = affected part rigid in flexion or extension). A weakness of the scale is that it is subjective and ordinal. It is frequently used due to its ease of application, good tolerability by patients, lack of equipment requirement, and low cost.
Spasticity severity was also assessed by the patient using a Visual Analog Scale (VAS) for the elbow (VAS-E), wrist (VAS-W), fingers (VAS-F), and ankle (VAS-A). Both MAS and VAS spasticity assessments were conducted on the muscles targeted by the BoNT injections.
The presence and location of pain (upper/lower extremity, plegic/non-plegic side) were recorded, and the severity of pain was evaluated by using VAS. The VAS is the most widely used and easy-to-use scale for assessing pain. The patients were asked to rate their pain on a horizontal 10-cm line, scoring from 0 (no pain) to 10 (worst pain imaginable) [10].
Sleep quality was assessed using the Pittsburgh Sleep Quality Index (PSQI). The PSQI is a standardized sleep quality assessment scale developed by Buysse et al. in 1989. Validity and reliability studies have been conducted [11]. The PSQI consists of 19 items under seven subscales (subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleep medication, and daytime dysfunction) rated on a scale of 0 to 3. The sum of all subscales is assessed with a total sleep quality score ranging from 0 to 21, with higher scores indicating lower sleep quality. A total score of five or higher is considered indicative of poor sleep quality [11].
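The PSQI scoring rule described above is simple to express programmatically. The sketch below is illustrative only (not part of the study’s analysis) and assumes the seven component scores have already been derived from the 19 questionnaire items:

```python
def psqi_total(components):
    """Sum the seven PSQI component scores (each 0-3) into a global
    score from 0 to 21; a total of 5 or higher is conventionally
    interpreted as poor sleep quality."""
    scores = list(components)
    if len(scores) != 7 or any(not 0 <= s <= 3 for s in scores):
        raise ValueError("expected seven component scores, each 0-3")
    total = sum(scores)
    return total, total >= 5

# e.g. component scores 1+2+0+1+1+0+2 give a total of 7 -> poor sleep
total, poor_sleep = psqi_total([1, 2, 0, 1, 1, 0, 2])
```

Note that higher totals indicate worse sleep, so the statistically significant post-treatment reduction in PSQI reported later corresponds to an improvement.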
Quality of life was measured using the EuroQol-5 Dimensions-5 Levels (EQ-5D-5L). The EQ-5D-5L is a brief, multi-attribute, generic health status measure comprising a five-item descriptive system with Likert-type response options and a visual analog scale (EQ-VAS). The descriptive system covers five dimensions of health (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression), each with five severity levels (no problems, slight problems, moderate problems, severe problems, and extreme problems or inability). The EQ-VAS asks patients to rate their health on a scale from 0 (worst imaginable health) to 100 (best imaginable health). In this study, the Turkish version of the scale, which the EuroQol group has translated into 171 languages, was used [12].
The anxiety and depression levels were evaluated with the Hospital Anxiety and Depression Scale (HADS). The HADS was developed by Zigmond and Snaith, and the validity and reliability of the Turkish version were examined by Aydemir et al. [13]. The scale consists of 14 items and comprises two factors: the seven odd-numbered items measure anxiety, while the seven even-numbered items measure depression. Scoring varies by item: the response options of items 1, 3, 5, 6, 8, 10, 11, and 13 are scored 3, 2, 1, 0, while those of items 2, 4, 7, 9, 12, and 14 are scored 0, 1, 2, 3. The cut-off score is 10 for the anxiety subscale and 7 for the depression subscale; respondents with scores above these thresholds are considered at risk.
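The HADS subscale split described above (odd-numbered items to anxiety, even-numbered items to depression, with cutoffs of 10 and 7) can be sketched as follows. This is an illustrative sketch, not the study’s code; it assumes each item response has already been mapped to its 0-3 value according to the per-item scoring key:

```python
def hads_subscales(item_scores):
    """Split 14 HADS item values (each already keyed to 0-3) into
    anxiety (odd-numbered items) and depression (even-numbered items)
    sums, flagging scores above the cutoffs (10 and 7) as at risk."""
    if len(item_scores) != 14 or any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("expected 14 item values, each 0-3")
    anxiety = sum(item_scores[0::2])     # items 1, 3, 5, ..., 13
    depression = sum(item_scores[1::2])  # items 2, 4, 6, ..., 14
    return {
        "anxiety": anxiety,
        "anxiety_at_risk": anxiety > 10,
        "depression": depression,
        "depression_at_risk": depression > 7,
    }
```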
This study was conducted in accordance with the principles of the Declaration of Helsinki. Written informed consent was obtained from the patients, and detailed information about the study was included in the consent form.
Statistical Analysis
Continuous variables were expressed as mean ± standard deviation values, while categorical data were expressed as frequencies and percentages. The normality of continuous variables was assessed using the Kolmogorov-Smirnov goodness-of-fit test. Comparisons between two groups were undertaken using Student’s t-test if the data followed a normal distribution and the Mann-Whitney U test otherwise. Pre- and post-treatment comparisons were conducted using the Friedman test (post hoc: Wilcoxon signed-rank test). For comparing categorical data, the chi-square test and/or Fisher’s exact test were employed. The relationship between scales was tested using Spearman’s rho correlation analysis. Analyses were performed using IBM SPSS version 26.0 (IBM Corporation, Armonk, NY, USA). The statistical significance level was accepted as p < 0.05.
Ethical Approval
This study was approved by the Adana City Training and Research Hospital Clinical Research Ethics Committee (Date: 2021-07-14, No: 85-KAEK-1490).
Results
The study included a total of 38 patients, of whom 60.5% were male and 39.5% were female. The mean age of the patients was 54.84 ± 13.69 years. The demographic characteristics of the patients are presented in Table 1.
It was determined that Brunnstrom staging scores (hand, upper extremity, and lower extremity) showed statistically significant improvement at the first- and third-month follow-ups compared to pre-treatment values. In addition, MAS (E, W, F, and A), VAS-spasticity (E, W, F, and A), HADS-A, HADS-D, PSQI, and EQ-5D-5L scores were found to have statistically significant reductions at both the first- and third-month follow-ups compared to pre-treatment (p < 0.001). In contrast, EQ-VAS scores showed a statistically significant increase at the first and third months compared to pre-treatment (p<0.001). However, no statistically significant differences were found between the first and third post-treatment measurements for Brunnstrom scores (hand, upper extremity, and lower extremity), MAS (E, W, F, and A), HADS-A, HADS-D, or PSQI scores (p > 0.05). Nevertheless, VAS-spasticity (E, W, F, and A) scores were found to be significantly higher at the third month compared to the first month after treatment (Table 2, Figures 1, 2).
Table 3 presents the first-month evaluation of patients with good and poor sleep quality after BoNT treatment according to their demographic and clinical characteristics.
No significant correlations were found between PSQI scores and other scale scores at the first-month follow-up after BoNT treatment (p > 0.05). However, at the third month following treatment, a positive, moderate, and statistically significant correlation was identified between PSQI and HADS-D scores.
Discussion
This study provides insight into the changes in spasticity and sleep quality at multiple time points following BoNT injections in stroke patients. According to our findings, BoNT injections for the treatment of spasticity in stroke patients have the potential to improve sleep quality. In addition, a reduction in extremity pain, improvements in anxiety-depression scores, and enhanced quality of life were observed. The improvements in sleep quality may be related to the reduction in spasticity and pain, as well as improvements in depressive symptoms.
Several studies have explored the coexistence of stroke and sleep disorders, but there are only limited studies on sleep quality in stroke patients without a diagnosed sleep disorder [14]. In the current study, we evaluated sleep quality in stroke patients without a sleep disorder diagnosis before and after BoNT injections.
Previous research has shown that spasticity following a stroke can affect sleep [2, 6]. While it is well known that BoNT treatment reduces spasticity in stroke patients, only a few studies have examined the effect of this treatment on sleep quality. One of these studies showed results similar to ours, indicating that BoNT, frequently used for treating spasticity, also has a positive impact on sleep quality [7]. Furthermore, a study on children with cerebral palsy (CP) investigated the effect of BoNT on sleep problems, concluding that BoNT injections aimed at reducing spasticity might have the potential to improve sleep quality in patients with CP [15]. In our study, we also observed a statistically significant reduction in PSQI scores following BoNT treatment, indicating an improvement in sleep quality.
In the present study, MAS scores at both the first and third months post-injection showed a statistically significant reduction compared to baseline. However, there was no statistically significant difference in MAS between the first and third months. While the physician-rated MAS scores remained stable, the patient-reported VAS-spasticity scores revealed a significant increase at the third month compared to the first. This divergence may point to a gradual decline in the perceived therapeutic effect of BoNT over time.
Spasticity can be a direct or indirect cause of pain [16]. Some studies involving BoNT treatment have also included pain as an outcome measure. In most studies evaluating pain, a reduction in pain was observed parallel to the reduction in spasticity [17]; there is also research reporting no change in pain despite reduced spasticity [18]. In the current study, both spasticity and pain significantly decreased following BoNT treatment. However, we found no correlation between spasticity, pain, and PSQI scores after treatment.
BoNT also possesses analgesic effects resulting from its reduction of muscle hyperactivity and therefore has been widely investigated in various pain-related conditions, including myofascial pain syndrome, headache, arthritis, and neuropathic pain [19]. Findings from these studies suggest that BoNT injections may alleviate pain [20], although this remains a subject of ongoing debate [16]. In this study, we hypothesized that BoNT could contribute to improved sleep quality, not only through its antispastic effects but also through its potential analgesic effects.
Previous research has shown an association between poor sleep quality and depression in stroke patients [21], and a higher incidence of anxiety has also been observed among stroke patients with poor sleep quality [22]. In our study, the correlation between depressive symptoms and sleep quality, which was not significant in the first month, became statistically significant in the third month. This emerging correlation may be related to patients’ increased perception of spasticity at the third month compared to the first, which may in turn negatively affect their psychological well-being and sleep patterns.
The discussion surrounding the effect of BoNT on functional improvement has been ongoing for many years. Numerous meta-analyses and systematic reviews have reported that BoNT reduces spasticity [23], and this reduction has been associated with improvements in quality of life [24, 25]. In our study, BoNT significantly reduced spasticity, and we observed a statistically significant increase in Brunnstrom staging and EQ-5D-5L scores, indicating functional improvement.
Limitations
The primary limitation of our study is the small sample size. Another limitation is that sleep quality was not evaluated using an objective parameter, such as polysomnography. This study may be considered a preliminary investigation into the effects of BoNT on sleep quality in stroke patients with spasticity, as there is limited research in the literature on this topic in adult patients. We consider that our study will contribute to the literature in this regard.
Conclusion
In conclusion, BoNT, which has been successfully used in spasticity treatment for many years, has a broad range of applications in various conditions involving excessive muscle activity. In addition, a substantial body of research has explored its use in pain-related conditions. Our study revealed the positive effects of BoNT on sleep quality; however, larger prospective randomized controlled trials are necessary to elucidate the underlying mechanisms of this effect.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content, including study design, data collection, analysis and interpretation, writing, some of the main lines, or all of the preparation and scientific review of the contents, and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. O’Dell MW, Lin C-C D, Harrison V. Stroke rehabilitation: strategies to enhance motor recovery. Annu Rev Med. 2009;60:55-68.
2. Pandyan AD, Gregoric M, Barnes MP, et al. Spasticity: clinical perceptions, neurological realities and meaningful measurement. Disabil Rehabil. 2005;27(1-2):2-6.
3. Cabanas-Valdés R, Serra-Llobet P, Rodriguez-Rubio PR, et al. The effectiveness of extracorporeal shock wave therapy for improving upper limb spasticity and functionality in stroke patients: a systematic review and meta-analysis. Clin Rehabil. 2020;34(9):1141-56.
4. Tater P, Pandey S. Botulinum toxin in movement disorders. Neurol India. 2018;66(Supplement):S79-S89.
5. Wissel J, Ward AB, Erztgaard P, et al. European consensus table on the use of botulinum toxin type A in adult spasticity. J Rehabil Med. 2009;41(1):13-25.
6. Milinis K, Young CA. Trajectories of Outcome in Neurological Conditions (TONiC) study. Systematic review of the influence of spasticity on quality of life in adults with chronic neurological conditions. Disabil Rehabil. 2016;38(15):1431-41.
7. Deveci H. Evaluation of the Effectiveness of Treatment with Botulinum Toxin on Sleep Quality in Stroke-Related Spasticity. J Stroke Cerebrovasc Dis. 2020;29(10):105160.
8. Brunnstrom S. editor. Recovery stages and evaluation procedures. Movement therapy in hemiplegia: A neurophysiological approach. New York: Harper and Row;1970.p.34-55.
9. Bohannon RW, Smith MB. Interrater reliability of a modified Ashworth scale of muscle spasticity. Phys Ther. 1987;67(2):206-7.
10. Jensen MP, Karoly P, Braver S. The measurement of clinical pain intensity: a comparison of six methods. Pain. 1986;27(1):117-26.
11. Backhaus J, Junghanns K, Broocks A. Test-retest reliability and validity of the Pittsburgh Sleep Quality Index in primary insomnia. J Psychosom Res. 2002;53(3):737-40.
12. Herdman M, Gudex C, Lloyd A, et al. Development and preliminary testing of the new five-level version of EQ-5D (EQ-5D-5L). Qual Life Res. 2011;20(10):1727-36.
13. Aydemir O. Validity and reliability of the Turkish form of the hospital anxiety and depression scale. Turk Psikiyatri Derg. 1997;8:187-280.
14. Khot SP, Morgenstern LB. Sleep and stroke. Stroke. 2019;50(6):1612-7.
15. Binay VS, Ozbudak SD, Ozkan E. Effects of botulinum toxin serotype A on sleep problems in children with cerebral palsy and on mothers’ sleep quality and depression. Neurosciences (Riyadh). 2016;21(4):331-7.
16. Trompetto C, Marinelli L, Mori L, et al. Pathophysiology of Spasticity: Implications for Neurorehabilitation. Biomed Res Int. 2014;2014:354906.
17. Dunne JW, Gracies JM, Hayes M. A prospective, multicentre, randomized, double-blind, placebo-controlled trial of onabotulinumtoxinA to treat plantarflexor/invertor overactivity after stroke. Clin Rehabil. 2012;26(9):787-97.
18. Wein T, Esquenazi A, Jost WH. OnabotulinumtoxinA for the treatment of post-stroke distal lower-limb spasticity: a randomized trial. PM R. 2018;10(7):693-703.
19. Sim WS. Application of botulinum toxin in pain management. Korean J Pain. 2011;24(1):1-6.
20. Sycha T, Samal D, Chizh B, et al. A lack of antinociceptive or antiinflammatory effect of botulinum toxin A in an inflammatory human pain model. Anesth Analg. 2006;102(2):509-16.
21. Davis JC, Falck RS, Best JR. Examining the inter-relations of depression, physical function, and cognition with subjective sleep parameters among stroke survivors: A Cross-sectional Analysis. J Stroke Cerebrovasc Dis. 2019;28(8):2115-23.
22. Xiao M, Huang G, Feng L, et al. Impact of sleep quality on post-stroke anxiety in stroke patients. Brain Behav. 2020;10(12):e01716.
23. Santamato A, Cinone N, Panza F, et al. Botulinum toxin type A for the treatment of lower limb spasticity after stroke. Drugs. 2019;79(2):143-60.
24. Facciorusso S, Spina S, Picelli A, et al. The role of botulinum toxin type-A in spasticity: research trends from a bibliometric analysis. Toxins (Basel). 2024;16(4):184.
25. Nasb M, Li Z, S A Youssef A. Comparison of the effects of modified constraint-induced movement therapy and intensive conventional therapy with a botulinum-A toxin injection on upper limb motor function recovery in patients with stroke. Libyan J Med. 2019;14(1):1609304.
Sıdıka Büyükvural Şen, Betül Yavuz Keleş. Sleep quality in stroke patients receiving botulinum toxin treatment for spasticity. Ann Clin Anal Med 2025;16(9):617-622
The comparison of 24-hour urinary protein excretion and the urine protein creatinine ratio in obese and morbidly obese individuals
Serkan Yaşar, Mehmet Deniz Şahin, İdris Şahin
Department of Internal Medicine, Faculty of Medicine, Inönü University, Malatya, Turkey
DOI: 10.4328/ACAM.22767 Received: 2025-06-06 Accepted: 2025-07-08 Published Online: 2025-07-16 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):623-626
Corresponding Author: Serkan Yaşar, Department of Internal Medicine, Faculty of Medicine, Inönü University, Malatya, Turkey. E-mail: drserkanyasar1522@gmail.com P: +90 505 704 11 49 Corresponding Author ORCID ID: https://orcid.org/0000-0002-1465-4131
Other Authors ORCID ID: Mehmet Deniz Şahin, https://orcid.org/0000-0002-3028-912X . İdris Şahin, https://orcid.org/0000-0002-8683-3737
This study was approved by the İnönü University Clinical Research Ethics Committee (Date: 2015-09-16, No: 2015/160)
Aim: Obesity is a major global health issue and a recognized risk factor for chronic kidney disease (CKD). Proteinuria, a key marker of renal function, is traditionally assessed via 24-hour urine collection, though this poses practical challenges. This study evaluated the correlation between spot urine protein-to-creatinine ratio (PCR) and 24-hour proteinuria in obese individuals, exploring the feasibility of using spot PCR as a simpler alternative.
Materials and Methods: A total of 178 obese individuals (BMI ≥ 35) were included in this prospective study conducted between August 2015 and October 2017. Individuals with conditions potentially affecting renal function were excluded. Proteinuria was assessed through both 24-hour urine collection and spot urine PCR. Correlation analyses were performed using Pearson or Spearman tests based on data distribution.
Results: The mean age of participants was 40.1 ± 11.5 years, with 75.3% being female. Proteinuria levels showed a moderate but statistically significant correlation between spot urine PCR and 24-hour urine protein excretion (p = 0.003, r = 0.223). However, in the subgroup with spot urine proteinuria ≥ 0.2 g/day, the correlation was not statistically significant (p = 0.064). BMI and proteinuria were weakly correlated, with results approaching but not reaching statistical significance (p = 0.051, r² = 0.147).
Discussion: Spot urine PCR demonstrates a moderate correlation with 24-hour urinary protein excretion and may serve as a practical screening tool for proteinuria in obese individuals. However, its reliability in cases of higher proteinuria requires further investigation with larger sample sizes.
Keywords: obesity, chronic kidney disease, protein-to-creatinine ratio
Introduction
Obesity has existed since ancient times and has been regarded as a symbol of wealth, prestige, and even beauty in different eras and regions. However, as the chronic health problems associated with obesity became increasingly recognized, it came to be classified as a disease requiring treatment. According to the World Health Organization (WHO), overweight and obesity are defined as abnormal or excessive fat accumulation that presents a health risk. Despite its well-documented limitations, the WHO recommends the use of body mass index (BMI) in field studies on obesity [1, 2]. A BMI over 25 is considered overweight, and over 30 obese [3]. In 2019, an estimated 5 million noncommunicable disease (NCD) deaths were caused by higher-than-optimal BMI [4]. Over the past five decades, the global prevalence of obesity has steadily risen. A comprehensive analysis of BMI trends among children and adolescents across multiple countries from 1975 to 2016 indicates a universal increase in obesity rates, albeit with some regional variations [5]. Current trends suggest that by 2025, the global prevalence of adult obesity will reach 18% among men and 21% among women [6].
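The BMI bands mentioned above (over 25 overweight, over 30 obese) translate directly into a classification rule. The sketch below is illustrative only; the function name and band labels are our own, and a BMI ≥ 35 band is added to mirror this study’s inclusion threshold:

```python
def bmi_category(weight_kg, height_m):
    """Compute BMI (kg/m^2) and classify it against the WHO-style
    bands described in the text; BMI >= 35 is flagged separately to
    mirror this study's inclusion threshold."""
    bmi = weight_kg / height_m ** 2
    if bmi >= 35:
        category = "obese, BMI >= 35 (study inclusion range)"
    elif bmi >= 30:
        category = "obese"
    elif bmi >= 25:
        category = "overweight"
    else:
        category = "normal or underweight"
    return round(bmi, 1), category

# e.g. 100 kg at 1.70 m -> BMI 34.6, "obese"
```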
Obesity is linked to numerous health complications, including hyperinsulinemia, lipid metabolism disorders, nonalcoholic fatty liver disease, coronary artery disease, cardiovascular conditions, various cancers, and chronic kidney disease (CKD) [7]. In recent years, the growing prevalence of obesity has contributed to its recognition as a significant risk factor for CKD. An analysis of UK Biobank data indicated that a genetically estimated 0.06 increase in waist-to-hip ratio raises CKD risk by 30%, while a 5 kg/m² rise in BMI increases the risk by 50% [8]. Similarly, a 14-year cohort study of Korean adults identified both elevated baseline BMI and waist-to-hip ratio as independent risk factors for CKD development, confirming a strong association between obesity and CKD incidence [9].
In current clinical practice, the protein excretion rate (PER) in patients with conditions such as diabetes mellitus, hypertension, and CKD can be assessed using various methods. Traditionally, PER has been measured through timed urine collections, most commonly over 24 hours [10]. While PER has been regarded as the gold standard for evaluating proteinuria, the requirement to collect urine over an extended duration poses practical challenges. As a result, alternative methods such as the albumin-creatinine ratio (ACR) and total protein-creatinine ratio (PCR) in single, untimed urine samples have gained widespread use [10].
To date, no study in the literature has specifically examined the relationship between the urine PCR in spot urine samples and 24-hour urinary protein excretion in obese and morbidly obese individuals. This study aims to evaluate the correlation between the PCR in spot urine and 24-hour urinary protein excretion, assessing the reliability of the spot urine protein-to-creatinine ratio as a more practical alternative for proteinuria measurement in obese and morbidly obese individuals.
Materials and Methods
Patients and Design
The study included obese individuals aged 18 years and older with a BMI ≥ 35 who visited the Nephrology, Endocrinology, and General Surgery Obesity outpatient clinics of İnönü University Faculty of Medicine between August 7, 2015, and October 11, 2017.
Initially, a comprehensive medical history was obtained, followed by thorough physical examinations of the patients. The history included the patients’ complaints, coexisting systemic diseases (such as diabetes, cardiovascular disease, obstructive sleep apnea syndrome, and hypertension), and current medications, all of which were recorded. Patients with a history of chronic liver disease, chronic or acute renal failure, hypertension, diabetes mellitus, and urinary infection were excluded from the study. Information about the study was provided to the patients, and written informed consent was obtained. In addition to demographic data such as age and gender, BMI, height, and weight were measured and recorded.
Calculation Methods
Blood samples collected from the patients were analyzed for glucose, blood urea nitrogen (BUN), creatinine, uric acid, triglycerides, cholesterol, lactate dehydrogenase (LDH), high-density lipoprotein (HDL), aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), and gamma-glutamyl transferase (GGT) levels.
Additionally, a complete urinalysis was performed, and spot urine samples were analyzed for microprotein and creatinine, while 24-hour urine samples were assessed for proteinuria. Imaging evaluation of the patients was performed using urinary system ultrasonography (USG).
The BMI of the patients was calculated using the TANITA device (TANITA TYPE TFB-300 M, Tokyo, Japan) based on the formula: BMI = weight (kg) / height (m)².
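As an editorial illustration only (the study itself used the TANITA bioimpedance device), the BMI formula above can be sketched in a few lines of Python; the function name is ours, and the example input uses the cohort mean weight and height reported in the Results:

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

# Cohort means from the Results section (120.9 kg, 162.5 cm):
print(round(bmi(120.9, 162.5), 1))  # → 45.8, well above the BMI ≥ 35 inclusion threshold
```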
Proteinuria levels were measured in spot urine samples and 24-hour urine collections using the turbidimetric method with the Abbott ARCHITECT C16000 device (Abbott Laboratories, Abbott Park, IL, USA).
Patients began the urine collection process by excluding the first morning urine and continued to collect all urine until the same time the next day. The first morning urine of the second day was included in the collection, and the procedure was then completed.
Statistical Analysis
A power analysis determined that a minimum sample size of 42 participants was required to compare proteinuria levels in 24-hour urine samples and spot urine samples in obese patients, with a 95% confidence level and a 3% margin of error.
Statistical analysis of the research data was performed using SPSS for Windows version 22.0. Quantitative data were expressed as mean ± standard deviation (SD), while qualitative data were presented as counts and percentages (%). The Kolmogorov-Smirnov test was used to assess the normality of data distribution. For correlation analysis, Pearson's correlation test was applied to normally distributed data, while the Spearman rank correlation test was used for non-normally distributed data. A p-value ≤ 0.05 was considered statistically significant.
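The analysis was run in SPSS, but the two correlation measures named above are simple to state explicitly. The following pure-Python sketch (function names and toy spot-PCR vs. 24-hour protein values are ours, not the authors' data) shows the relationship: Spearman's rho is just Pearson's r computed on midranks, which is why it is preferred when the raw values are not normally distributed:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def midranks(values):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return pearson_r(midranks(x), midranks(y))

# Hypothetical spot PCR (g/g) vs. 24-hour protein (mg/day) pairs:
spot = [0.1, 0.3, 0.2, 0.5, 0.4]
daily = [120, 400, 180, 900, 350]
print(spearman_rho(spot, daily))  # → 0.9
```

In practice `scipy.stats.pearsonr` and `scipy.stats.spearmanr` also return the p-values reported in the Results.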
Ethical Approval
This study was approved by the İnönü University Clinical Research Ethics Committee (Date: 2015-09-16, No: 2015/160).
Results
The study included 178 obese individuals, of whom 134 (75.3%) were female. The mean age of the participants was 40.1 ± 11.5 years (range: 18–74 years), with a mean weight of 120.9 ± 18.9 kg (range: 80.5–179.5 kg) and a mean height of 162.5 ± 9.9 cm (range: 124–193 cm).
Proteinuria levels in 24-hour urine and spot urine samples were analyzed in relation to BMI. The distribution of proteinuria among participants revealed that 136 (76%) individuals exhibited proteinuria levels below 150 mg/day, 33 (18.5%) had minimally elevated levels (150–500 mg/day), 7 (3.9%) displayed non-nephrotic levels (500–3500 mg/day), and 1 (0.5%) individual had nephrotic-range proteinuria (> 3500 mg/day). Proteinuria averages according to BMI are presented in Table 1.
The correlation between BMI and proteinuria was assessed, yielding a p-value of 0.051 and an r² value of 0.147. Although the p-value narrowly exceeded the threshold for statistical significance, it was hypothesized that a larger sample size might yield statistically significant results.
A correlation analysis was also performed to compare proteinuria levels derived from spot urine samples and 24-hour urine collections. This analysis demonstrated a statistically significant moderate correlation (p = 0.003, r = 0.223). Furthermore, for individuals with a spot urine proteinuria level of ≥ 0.2 g/day (n = 22), a comparison between proteinuria levels calculated from 24-hour urine and the spot urine protein/creatinine ratio revealed no statistically significant correlation (p = 0.064).
Discussion
Obesity is becoming an increasingly significant public health concern. It is now known that obesity increases renal protein and microalbumin excretion, even in the absence of comorbid conditions such as diabetes mellitus, hypertension, or chronic kidney disease. However, large-scale studies on the prevalence of this condition are lacking, and therefore its exact frequency remains unclear. In our study, we aimed to evaluate whether spot urine samples or 24-hour urinary protein excretion is a more suitable method for measuring proteinuria in obese patients.
In a study conducted by Rosenstock et al. in 2018, the prevalence of proteinuria and albuminuria was found to be 21% and 19.7%, respectively. Among individuals without diabetes mellitus but with hypertension, the prevalence of proteinuria was 22.6%, and albuminuria was 17%. In patients with neither diabetes mellitus nor hypertension, the prevalence of proteinuria and albuminuria was 13.3% and 11%, respectively [11].
Daily urinary protein excretion is measured by collecting 24-hour urine samples, which is considered the gold standard. However, due to the challenges associated with this method, including difficulties in implementation, potential measurement errors from improper urine collection, and lack of standardization in storage conditions, there has been a search for more practical alternatives. One of the more practical alternatives currently used in many diseases to assess proteinuria is the ratio of total protein to creatinine (expressed as mg/mg) in a random urine sample. However, the dipstick method used for measuring microalbuminuria may be considered insensitive in the early stages of increased glomerular permeability, as it is generally not expected to yield a positive result unless protein excretion exceeds 300–500 mg/day.
A cross-sectional study investigating the association between obesity, central obesity, and increased urinary albumin-creatinine ratio included 2,889 participants. The study found an increased risk of elevated urinary albumin-creatinine ratio when comparing overweight and obese individuals to those with normal weight. Similarly, compared to participants with normal waist-to-hip ratios, those with central obesity also showed an increased albumin-creatinine ratio. The study further demonstrated a significant positive association between BMI and elevated albumin-creatinine ratio [12]. In our study, we found a weak correlation between the increase in BMI and proteinuria, which was not statistically significant. We attributed the discrepancy from the literature to our relatively small sample size.
Although there is no study in the literature specifically comparing the reliability of spot urine samples and 24-hour urinary protein excretion for assessing proteinuria in obese patients, studies addressing this issue are available for patients with diabetes mellitus, certain rheumatologic diseases, and preeclampsia. In the study conducted by Demirci et al. on whether the spot urine protein/creatinine ratio could be an alternative to 24-hour urine protein in patients diagnosed with preeclampsia, 211 pregnant women meeting the preeclampsia criteria and 53 pregnant women in the control group were included [13]. A good correlation was found between 24-hour urine protein and the spot urine protein/creatinine ratio (r = 0.758). As a result, the spot urine protein/creatinine ratio was evaluated as a good predictor for proteinuria screening. According to this study, a spot urine protein/creatinine ratio of 1 g or higher appears to have high predictive value for the diagnosis of proteinuria and was concluded to be a rapid test that could be used to avoid delays in diagnosis in preeclamptic patients [13]. In another study, conducted by Ralston et al. in a rheumatology clinic, the dipstick test, 24-hour urine quantitative protein measurement, and random urine protein/creatinine ratio were compared. The results showed a strong correlation between 24-hour total protein excretion and the random urine protein/creatinine ratio (r = 0.92, p < 0.001) [12]. In another study, conducted by Mahaseth et al. in 2022, a linear relationship was found between the spot urine protein/creatinine ratio and the 24-hour urinary total protein, with a correlation coefficient of 0.877 (p < 0.01) [14]. However, at higher levels of protein excretion (> 3.5 g/day), the correlation was observed to be suboptimal.
However, there are also some studies that contradict these findings. In patients with chronic kidney disease, the study published by Sahu et al. in 2022 demonstrated that spot PCR can be a reliable parameter for the initial diagnosis of proteinuria. However, for follow-up measurements and in cases of proteinuria > 0.5 g/day, this correlation loses its significance, and 24-hour urinary total protein excretion is emphasized as the most accurate measurement [15]. In our study, a correlation analysis was conducted to compare proteinuria levels obtained from spot urine samples and 24-hour urine collections, revealing a statistically significant but moderate correlation (p = 0.003, r = 0.223). However, for individuals with a spot urine proteinuria level of ≥ 0.2 g/day (n = 22), no statistically significant correlation was observed between proteinuria levels calculated from 24-hour urine collections and the spot urine protein/creatinine ratio (p = 0.064). These findings suggest that in obese patients, the assessment of proteinuria using 24-hour urine collection provides a more accurate and reliable measurement compared to the spot urine protein/creatinine ratio.
Limitation
This study is subject to several limitations that warrant consideration, primarily the relatively small sample size, which may constrain the generalizability of the findings and necessitate larger-scale investigations to confirm these results across more diverse populations. Additionally, the reliance on spot urine samples collected at a single time point may not adequately capture temporal fluctuations, whereas repeated measurements would offer a more nuanced understanding of these variations. Notwithstanding these limitations, the present findings provide valuable contributions to the existing body of literature and highlight the necessity for further research in this domain.
Conclusion
Although the spot urine protein/creatinine ratio appears to be reliable in certain conditions, such as rheumatologic diseases and preeclampsia, in our study no statistically significant correlation was found between 24-hour urine protein levels and the spot urine protein/creatinine ratio in patients with proteinuria ≥ 0.2 g/g. These results indicate that the reliability of spot urine protein/creatinine measurements should be evaluated on a disease-specific basis. This study concludes that assessing proteinuria using 24-hour urine protein analysis is more appropriate in obese patients.
Scientific Responsibility Statement
The authors declare that they are responsible for the article's scientific content, including study design, data collection, analysis and interpretation, writing, preparation of some or all of the manuscript, and the scientific review and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Heyward VH. Practical body composition assessment for children, adults, and older adults. Int J Sport Nutr Exerc Metab. 1998;8(3):285-307.
2. Sweatt K, Garvey WT, Martins C. Strengths and limitations of BMI in the diagnosis of obesity: What is the path forward? Curr Obes Rep. 2024;13(3):584-95.
3. O’Reilly D; NCD Risk Factor Collaboration. Worldwide trends in body-mass index, underweight, overweight, and obesity from 1975 to 2016: a pooled analysis of 2416 population-based measurement studies in 128.9 million children, adolescents, and adults. Lancet. 2017;390(10113):2627-42.
4. Luconi E, Tosi M, Boracchi P, et al. Italian and Middle Eastern adherence to Mediterranean diet in relation to body mass index and non-communicable diseases: nutritional adequacy of simulated weekly food plans. J Transl Med. 2024;22:703.
5. Bentham J, Di Cesare M, Bilano V, et al. Worldwide trends in children’s and adolescents’ body mass index, underweight and obesity, in comparison with adults, from 1975 to 2016: a pooled analysis of 2416 population-based measurement studies with 128.9 million participants. Lancet. 2017;390(10113):2627-42.
6. NCD Risk Factor Collaboration (NCD-RisC). Trends in adult body-mass index in 200 countries from 1975 to 2014: a pooled analysis of 1698 population-based measurement studies with 19.2 million participants. Lancet. 2016;387(10026):1377-96.
7. Hao M, Lv Y, Liu S. The new challenge of obesity-obesity-associated nephropathy. Diabetes Metab Syndr Obes. 2024;17:1957-71.
8. Zhu P, Herrington WG, Haynes R, et al. Conventional and genetic evidence on the association between adiposity and CKD. J Am Soc Nephrol. 2021;32(1):127-37.
9. Zhang H-S, An S, Ahn C. Obesity measures at baseline, their trajectories over time, and the incidence of chronic kidney disease: A 14-year cohort study among Korean adults. Nutr Metab Cardiovasc Dis. 2021;31(1):782-92.
10. Rodby RA. Timed urine collections for albumin and protein: “The king is dead, long live the king!”. Am J Kidney Dis. 2016;68(6):836-8.
11. Rosenstock JL, Pommier M, Stoffels G. Prevalence of proteinuria and albuminuria in an obese population and associated risk factors. Front Med (Lausanne). 2018;5(122):122.
12. Ralston SH, Caine N, Richards I. Screening for proteinuria in a rheumatology clinic: comparison of dipstick testing, 24-hour urine quantitative protein, and protein/creatinine ratio in random urine samples. Ann Rheum Dis. 1988;47(1):759-63.
13. Demirci O, Kumru P, Arınkan A, et al. Spot protein/creatinine ratio in preeclampsia as an alternative for 24-hour urine protein. Balkan Med J. 2015;32(1):51-5.
14. Mahaseth A, Pahari B, Ghimire M. To compare spot urine protein-creatinine ratio within 24 hours urine protein for quantification of proteinuria – a hospital-based cross-sectional study. 2023;34(6):548-57.
15. Sahu S, John J, Augusty A. Estimation of 24 hours urine protein versus spot urine protein creatinine ratio in patients with kidney disease. Indian J Clin Biochem. 2022;37(3):361-4.
Serkan Yaşar, İdris Şahin, Mehmet Deniz Şahin. The comparison of 24-hour urinary protein excretion and the urine protein creatinine ratio in obese and morbidly obese individuals. Ann Clin Anal Med 2025;16(9):623-626
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc/4.0/
Periorbital effects of antiglaucoma eye drops: A comparative analysis of prostaglandin analogs and other agents
Emine Savran Elibol, Sezer Hacıağaoğlu
Department of Ophthalmology, Faculty of Medicine, Bahcesehir University, Istanbul, Turkey
DOI: 10.4328/ACAM.22780 Received: 2025-07-02 Accepted: 2025-08-19 Published Online: 2025-08-30 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):627-631
Corresponding Author: Emine Savran Elibol, Department of Ophthalmology, Faculty of Medicine, Bahcesehir University, Istanbul, Turkey. E-mail: eminesavran@yahoo.com.tr P: +90 530 823 72 22 Corresponding Author ORCID ID: https://orcid.org/0000-0001-8988-8832
Other Authors ORCID ID: Sezer Hacıağaoğlu, https://orcid.org/0000-0003-3634-5433
This study was approved by the Ethics Committee of Bahçeşehir University (Date: 2025-06-18, No: 2025-10/01).
Aim: Prostaglandin analogs (PGAs), which are widely used in the treatment of glaucoma, may lead to periorbital changes collectively referred to as prostaglandin-associated periorbitopathy syndrome (PAPS). This study aims to compare the effects of different antiglaucoma eye drops and their durations of use on the periorbital region.
Materials and Methods: In this cross-sectional clinical study, patients diagnosed with glaucoma who were using antiglaucoma eye drops were evaluated. The patients were divided into three groups: those using PGAs (Group 1), those using non-PGA medications (Group 2), and those not using any medication (Group 3). Periorbital measurements, including margin reflex distance 1 (MRD1), levator function (LF), inferior scleral show, deepening of the upper eyelid sulcus (DUES), malar edema, and lower eyelid fat herniation, were assessed.
Results: MRD1 was significantly lower in Group 1 than in the other groups (p = 0.0001). LF was lowest in Group 1, higher in Group 2, and highest in Group 3 (p = 0.0001). DUES was most prominent in Group 1, less pronounced in Group 2, and least observed in Group 3 (p = 0.0001). As the duration of medication use increased, a decrease in MRD1 and LF was observed in both Group 1 (p = 0.0001 and p = 0.0001, respectively) and Group 2 (p = 0.001 and p = 0.0001, respectively). DUES became more prominent with increased medication duration in Group 1 (p = 0.007).
Discussion: PGAs were associated with a decrease in MRD1 and LF values, leading to ptosis and DUES. Other antiglaucoma eye drops did not cause significant ptosis but were linked to certain periorbital changes.
Keywords: prostaglandin A, blepharoptosis, eyelid diseases, orbital diseases
Introduction
Glaucoma is the second leading cause of irreversible blindness worldwide [1]. Glaucoma-associated progressive optic neuropathy leads to a reduction in visual acuity and constriction of visual fields [1]. The standard treatment for glaucoma is the control of intraocular pressure (IOP) with topical anti-glaucoma medications. These topical drugs require multiple dosing and long-term use. However, ocular surface toxicity and tear film abnormalities caused by these medications present a serious issue, affecting the entire ocular surface, including the conjunctiva, cornea, and eyelids [2]. Topical prostaglandin analogs (PGAs; also known as prostanoid prostaglandin F receptor agonists) are commonly preferred as first-line medical treatment for glaucoma. However, their use has been reported to lead to progressive periorbital changes and the development of side effects known as prostaglandin-associated periorbitopathy syndrome (PAPS) [3,4]. The commonly reported signs and symptoms of PAPS include hyperpigmentation of the periorbital skin, trichomegaly and hypertrichosis of the eyelashes, deepening of the upper eyelid sulcus, flattening of the lower eyelid bags, upper eyelid ptosis, mild enophthalmos, inferior scleral visibility, and involution of dermatochalasis [3]. Although the mechanism of PAPS is not completely understood, the primary pathophysiology is believed to involve orbital fat atrophy, thought to occur through the inhibition of adipogenesis via stimulation of prostaglandin F receptors in adipocytes [5]. Early-onset periorbitopathy has even been found to develop within just one month of starting PGA use [6]. The most prominent feature of PAPS is deepening of the upper eyelid sulcus (DUES) [7–9]. While PGAs can cause significant changes in the periorbital tissues, information on such effects with non-PGA glaucoma drops is limited; their periorbital effects appear less pronounced and require further investigation.
The aim of our study is to analyze the changes observed in the periorbital region in patients using anti-glaucoma drops, based on the type of medication and duration of use.
Materials and Methods
This study was designed as a single-center, cross-sectional study and was conducted between June 2025 and July 2025 at the Department of Ophthalmology, Dünyagöz Ataşehir Hospital. This study included both eyes of patients diagnosed with glaucoma who had been using at least one anti-glaucomatous eye drop for at least one year. Patients using PGA drops (latanoprost, travoprost, bimatoprost, or tafluprost) either alone or in combination with other drops were classified as Group 1. Patients using non-PGA antiglaucomatous drops (adrenergic agonists, beta-blockers, carbonic anhydrase inhibitors) were classified as Group 2. The control group (Group 3) consisted of glaucoma patients who were not using any eye drops.
The age, gender, treatment duration, and specific glaucoma medications used by each patient were recorded. As the study was conducted at a tertiary referral hospital, many patients were using multiple anti-glaucoma medications, most commonly a combination of a PGA and other eye drops. Therefore, the periorbital effects of anti-glaucoma drops were compared between patients whose regimen included a PGA (alone or in combination) and those whose regimen did not. To exclude conditions that could contribute to periorbital findings, a detailed patient history was obtained, and a physical examination was conducted.
Exclusion criteria included individuals with a known allergy to the anti-glaucoma medication used, those who had undergone ocular surgery within the last 6 months before the study, those with a history of any ocular plastic surgery, individuals with a history of eyelid trauma, those with a known history of exophthalmos or enophthalmos prior to anti-glaucoma treatment, patients with orbital and adnexal fractures, contact lens users, those with thyroid orbitopathy, and those with a history of carotid-cavernous fistula or neurological disorders.
The patients enrolled in the study were examined in the Oculoplastic and Orbital Surgery Department. Orbital and adnexal measurements were taken, including the margin-reflex distance 1 (MRD1), levator function (LF), inferior scleral appearance, DUES, malar edema, and lower eyelid fat herniation grades. The lower eyelid position was assessed, and the presence of entropion, ectropion, and trichiasis was recorded. All patients’ photographs were taken with a digital camera (E-PL3 14-42 mm lens; Olympus; China) from a distance of 30 cm, without flash and under ambient light conditions. These photos were analyzed by two oculoplastic specialists (E.S.E., S.H.) to verify the physical examination findings, and when they agreed, the findings were confirmed.
MRD1 measurement was performed to detect blepharoptosis. During the LF measurement, the patient’s eyebrow was stabilized with the researcher’s thumb. The patient was asked to look up and then look down. The eyelid movement range was measured using a ruler. The inferior scleral appearance was assessed by considering the contact between the lower eyelid and the limbus in a smooth contour as 0 mm, while the distance between the lower eyelid and the lower limbus was measured in millimeters. DUES was graded using a modified version of previous methods, based on the relationship of the upper sulcus with the upper orbital rim. A scale from 1 to 3 was used [10]. Malar edema was graded according to the classification by Lam and colleagues [11]. Lower eyelid fat herniation was classified by Liu and colleagues [12].
Statistical Method
Statistical analysis for this study was performed using IBM SPSS version 26.0 software. Descriptive statistics (frequency, percentage, median, interquartile range (IQR), and minimum-maximum values) were calculated for the demographic data and eyelid measurements of the participants, grouped by glaucoma group and the control group. The normality of the parameters was assessed using the Shapiro-Wilk test. The Chi-square test was used for comparisons of categorical variables. For intergroup comparisons of all numerical parameters, the Kruskal-Wallis test was applied, as the parameters did not follow a normal distribution; the Mann-Whitney U test was used for post hoc comparisons. All statistical analyses were conducted with a 95% confidence interval, and significance was set at p < 0.05.
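The omnibus test named above is easy to state concretely. This pure-Python sketch of the Kruskal-Wallis H statistic is an editorial illustration on hypothetical data (the helper names are ours; tie correction is omitted for brevity), not the authors' analysis, which was run in SPSS:

```python
def midranks(values):
    """1-based ranks of the pooled sample; ties share the average position."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def kruskal_h(groups):
    """Kruskal-Wallis H: rank all observations jointly, then compare
    per-group rank sums. Tie correction is omitted for brevity."""
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    ranks = midranks(pooled)
    h, idx = 0.0, 0
    for g in groups:
        rank_sum = sum(ranks[idx:idx + len(g)])
        idx += len(g)
        h += rank_sum ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Three hypothetical per-group samples (e.g. an MRD1-like measurement):
print(round(kruskal_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), 1))  # → 7.2
```

In practice `scipy.stats.kruskal` computes the same statistic with tie correction and returns the p-value; pairwise `scipy.stats.mannwhitneyu` serves for the post hoc comparisons.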
Ethical Approval
This study was approved by the Ethics Committee of Bahçeşehir University (Date: 2025-06-18, No: 2025-10/01).
Results
A total of 110 eyes using PGA drops for glaucoma (Group 1), 54 eyes using non-PGA anti-glaucoma drops (Group 2), and 192 eyes with no drop usage (Group 3) were included in the study. There was no statistically significant difference between the groups in terms of age (p = 0.385), gender (p = 0.682), and visual acuity in LogMAR (p = 0.657).
There is a statistically significant difference in MRD1 between the groups (p = 0.0001). MRD1 in Group 1 (2.76 ± 0.99) is significantly lower than in Group 2 (3.19 ± 0.77) (p = 0.008). Furthermore, MRD1 values in Group 1 (2.76 ± 0.99) are significantly lower than those in Group 3 (3.37 ± 0.86) (p = 0.0001) (Table 1).
There is a statistically significant difference in LF between the groups (p = 0.0001). LF is the lowest in Group 1 and the highest in Group 3 (Table 1).
There is a statistically significant difference in inferior scleral appearance between the groups (p = 0.0001). There is a significant difference between Group 1 and Group 3 (p = 0.0001) as well as between Group 2 and Group 3 (p = 0.0001). The inferior scleral appearance in patients using anti-glaucomatous eye drops (Groups 1 and 2) was measured as higher compared to Group 3, which did not use eye drops (Table 1). There is a statistically significant difference in lower eyelid fat herniation between the groups (p = 0.006). A significant difference was observed between Group 1 and Group 3 (p = 0.035) and between Group 2 and Group 3 (p = 0.003). Lower eyelid fat herniation in patients using anti-glaucomatous eye drops (Groups 1 and 2) was measured as higher compared to Group 3, which did not use eye drops (Table 1).
There is a statistically significant difference in DUES between the groups (p = 0.0001). The highest upper eyelid sulcus deformity values were observed in Group 1, while the lowest values were observed in Group 3. A statistically significant difference was found between all groups (Group 1 vs Group 3, p = 0.0001; Group 2 vs Group 3, p = 0.0001; Group 1 vs Group 2, p = 0.0001) (Table 1).
There is a statistically significant difference in malar edema between the groups (p = 0.0001). A significant difference was found between Group 1 and Group 3 (p = 0.0001) and between Group 2 and Group 3 (p = 0.0001). Malar edema values in patients using antiglaucomatous eye drops (Groups 1 and 2) were higher compared to Group 3, which did not use eye drops (Table 1).
None of the patients showed entropion or trichiasis. There was no statistically significant difference between the groups in terms of ectropion (p = 0.118) (Table 1).
In Group 1, there is a statistically significant difference in MRD1 and LF measurements according to the duration of medication use (p = 0.0001, p = 0.0001, respectively). As the duration of medication use increases, MRD1 and LF values decrease (Table 2).
In Group 1, there is a statistically significant difference in lower eyelid fat herniation and upper eyelid sulcus deformity values according to the duration of medication use (p = 0.006, p = 0.007, respectively). Patients who have used PGAs for more than 10 years showed an increase in lower eyelid fat herniation and DUES compared to those who used them for less than 10 years (Table 2).
In Group 1, there is no statistically significant difference in inferior scleral appearance and malar edema values according to the duration of medication use (p = 0.107, p = 0.392, respectively) (Table 2).
In Group 1, there is no statistically significant difference in ectropion based on the duration of medication use (p = 0.201) (Table 2).
In Group 2, there is a statistically significant difference in MRD1 and LF measurements according to the duration of medication use (p = 0.001, p = 0.0001, respectively). MRD1 and LF measurements are significantly lower in the 6-10 year group compared to the 0-5 year group (p = 0.001, p = 0.0001, respectively). Similarly, in the 11+ years group, MRD1 and LF values are significantly lower compared to the 0-5 year group (p = 0.006, p = 0.0001, respectively) (Table 3).
In Group 2, there is a statistically significant difference in the inferior eyelid fat herniation values according to the duration of medication use (p = 0.008). This difference is due to the higher inferior eyelid fat herniation values in the 6-10 year medication use group compared to the 0-5 year group (p = 0.003) (Table 3).
In Group 2, there is no statistically significant difference in inferior scleral appearance, DUES, and malar edema values according to the duration of medication use (p = 0.608, p = 0.205, p = 0.686, respectively) (Table 3).
In Group 2, there is no statistically significant difference in ectropion according to the duration of medication use (p = 0.099) (Table 3).
Discussion
In this study, glaucoma patients using either combined or single PGA, those using other antiglaucomatous drops (excluding PGA), and those not using any eye drops were compared in terms of periorbital changes.
Antiglaucomatous eye drops, especially PGAs, can cause various changes around the eyes. These changes are known as PAPS and are observed as hyperpigmentation, ptosis, deepening of the upper eyelid sulcus, and periorbital fat loss [3,4,7]. Periorbital and ocular surface changes have also been observed with other antiglaucomatous drops, excluding PGAs [2,13,14]. Topical antiglaucoma drops such as beta-blockers, apraclonidine, brimonidine, and dorzolamide can cause periorbital dermatitis, allergic edema, and fibrosis, potentially leading to reversible or scarred eyelid ectropion, especially in predisposed individuals [2,13–16]. PGAs are characterized by their strong intraocular pressure (IOP) lowering effects, minimal systemic side effects, and a once-daily dosing regimen, which contributes to good patient compliance [7,17]. Among these agents are latanoprost (0.005%), travoprost (0.004%), bimatoprost (0.01% and 0.03%), and tafluprost (0.0015%). Various methods have been suggested in the literature for grading PAPS [8,18]. In one study, the prevalence of PAPS (defined as patients with deepening of the upper eyelid sulcus plus at least three additional clinical signs) was found to be over 40%. It was also determined that patients over 60 years old have a threefold increased risk of developing PAPS [19].
Factors contributing to the severity of PAPS include the type of PGA and the duration of use. Risk factors for the development of PAPS include age (with a particularly significant relationship in individuals over 60 years old), the technique of drop application, and the duration of use [7,17,19–22].
Consistent with the literature, in this study, the MRD1 value was significantly lower in the PGA group compared to the other groups. The use of non-PGA anti-glaucomatous eye drops did not show a significant effect on MRD1 value or ptosis. Similarly, the LF value was found to be lower in patients using anti-glaucomatous eye drops, with or without PGA, compared to those not using any drops. No established relationship between LF and anti-glaucomatous drops has been demonstrated in the literature.
In our study, inferior scleral show, lower eyelid fat herniation, and malar edema were more common in patients using antiglaucomatous drops, possibly due to mechanical trauma during drop application—particularly with beta-blockers, dorzolamide, and brinzolamide [13,14]. These findings, though not classic components of PAPS, were not observed in non-drop users. Inferior scleral show was not increased in the PGA group, likely due to the lack of distinction between bimatoprost and other PGAs. As expected, upper eyelid sulcus deformity was most prominent in the PGA group and was associated with longer duration of drop use [23].
In this study, MRD1 values in the group using PGAs decreased as the duration of use increased; in other words, the longer the duration of drop usage, the greater the development of ptosis. Similarly, LF measurements decreased with increasing drop usage duration. We believe that fibrosis develops in the levator muscle with prolonged use of anti-glaucomatous drops. Supporting this argument, a study demonstrated that acquired blepharoptosis occurs as a combined effect of degenerative changes in the levator palpebrae superioris muscle and its aponeurosis, and a significant relationship was found between the severity of blepharoptosis and the reduction in levator muscle function [24].
DUES was found to be significantly more severe in patients using PGAs for 11 years or more. The literature reports that upper eyelid sulcus deformity can begin as early as 3-6 months after starting treatment. The reason we did not observe this finding in the 0-5 year group in our study may be the lack of subgrouping by specific PGA medications; as is well known, upper eyelid sulcus deformity has most frequently been reported with bimatoprost [25].
Limitations
This study has several limitations. First, the sample size was relatively small, which may have limited the ability to detect certain findings, such as ectropion, or to distinguish between different prostaglandin analog (PGA) derivatives. Second, the lack of subgroup analysis for individual medications may have masked drug-specific effects. Third, due to the cross-sectional design of the study, no causal inferences can be made regarding the relationship between the duration of drop use and periorbital changes. Lastly, in Group 1, some patients were using combination therapies containing prostaglandins along with other agents, which may have introduced confounding effects and limited the ability to attribute the observed changes solely to prostaglandin analogs.
Conclusion
In patients using PGAs, significant reductions in MRD1 and LF values were found, which were associated with ptosis and fibrotic changes in the levator muscle. Upper eyelid sulcus deformity became more pronounced, especially with long-term PGA use, and was observed to be severe in patients using PGAs for more than 11 years.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content including study design, data collection, analysis and interpretation, writing some of the main lines, or all of the preparation and scientific review of the contents and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Tham YC, Li X, Wong TY. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta‑analysis. Ophthalmology. 2014;121(11):2081–90.
2. Servat JJ, Bernardino CR. Effects of common topical antiglaucoma medications on the ocular surface, eyelids and periorbital tissue. Drugs Aging. 2011;28(1):267–82.
3. Sakata R, Chang PY, Sung KR, et al. Prostaglandin-associated periorbitopathy syndrome (PAPS): addressing an unmet clinical need. Semin Ophthalmol. 2022;37(4):447–54.
4. Berke SJ. PAP: New concerns for prostaglandin use. Rev Ophthalmol. 2012;19(10):70.
5. Taketani Y, Yamagishi R, Fujishiro T. Activation of the prostanoid FP receptor inhibits adipogenesis leading to deepening of the upper eyelid sulcus in prostaglandin-associated periorbitopathy. Invest Ophthalmol Vis Sci. 2014;55(3):1269–76.
6. Kucukevcilioglu M, Bayer A, Uysal Y. Prostaglandin associated periorbitopathy in patients using bimatoprost, latanoprost and travoprost. Clin Exp Ophthalmol. 2014;42(2):126–31.
7. Sakata R, Shirato S, Miyata K. Incidence of deepening of the upper eyelid sulcus on treatment with a tafluprost ophthalmic solution. Jpn J Ophthalmol. 2014;58(2):212–7.
8. Rabinowitz MP, Katz LJ, Moster MR, et al. Unilateral prostaglandin-associated periorbitopathy: a syndrome involving upper eyelid retraction distinguishable from the aging sunken eyelid. Ophthal Plast Reconstr Surg. 2015;31(6):373–8.
9. Sakata R, Shirato S, Miyata K. Recovery from deepening of the upper eyelid sulcus after switching from bimatoprost to latanoprost. Jpn J Ophthalmol. 2013;57(2):179–84.
10. Liang L, Sheha H, Fu Y. Ocular surface morbidity in eyes with senile sunken upper eyelids. Ophthalmology. 2011;118(12):2487–92.
11. Lam SM, Glasgold MJ, Glasgold RA. Complementary fat grafting. Philadelphia: Lippincott Williams & Wilkins; 2006.p.12-25.
12. Liu J, Huang C, Song B. A graded approach in East Asian personalized lower blepharoplasty: a retrospective study spanning 12 years. Indian J Ophthalmol. 2022;70(8):3088–94.
13. Kalavala M, Statham BN. Allergic contact dermatitis from timolol and dorzolamide eye drops. Contact Dermatitis. 2006;54(6):345.
14. Armisen M, Vidal C, Quintans R, Castroviejo M. Allergic contact dermatitis from apraclonidine. Contact Dermatitis. 1998;39(3):193.
15. Britt T, Burnstine MA. Iopidine allergy causing lower eyelid ectropion progressing to cicatricial entropion. Br J Ophthalmol. 1999;83(8):992-3.
16. Grassberger M, Baumruker T, Enz A, et al. A novel anti-inflammatory drug, SDZ ASM 981, for the treatment of skin diseases: in vitro pharmacology. Br J Dermatol. 1999;141(2):264–73.
17. Manju M, Pauly M. Prostaglandin-associated periorbitopathy: a prospective study in Indian eyes. Kerala J Ophthalmol. 2020;32(1):36–40.
18. Tanito M, Ishida A, Ichioka S, et al. Proposal of a simple grading system integrating cosmetic and tonometric aspects of prostaglandin-associated periorbitopathy. Medicine (Baltimore). 2021;100(34):e26874.
19. Patradul C, Tantisevi V, Manassakorn A. Factors related to prostaglandin-associated periorbitopathy in glaucoma patients. Asia Pac J Ophthalmol (Phila). 2017;6(3):238–42.
20. Shah M, Lee G, Lefebvre DR, et al. A cross-sectional survey of the association between bilateral topical prostaglandin analogue use and ocular adnexal features. PLoS One. 2013;8(5):e61638.
21. Kim HW, Choi YJ, Lee KW. Periorbital changes associated with prostaglandin analogs in Korean patients. BMC Ophthalmol. 2017;17(1):126.
22. Davis SA, Sleath B, Carpenter DM. Drop instillation and glaucoma. Curr Opin Ophthalmol. 2018;29(2):171–7.
23. Habchane A, Khatem S, Alioua A, et al. Ptosis following the use of dexamethasone-based eye drops. Sch J Med Case Rep. 2024;12(2):215–7.
24. Pereira LS, Hwang TN, Kersten RC. Levator superioris muscle function in involutional blepharoptosis. Am J Ophthalmol. 2008;145(6):1095–8.
25. Li W, Chen X, Chen S. Changes in prostaglandin-associated periorbital syndrome: a self-controlled and prospective study. Cutan Ocul Toxicol. 2025;44(1):35-42.
Download attachments: 10.4328.ACAM.22780
Emine Savran Elibol, Sezer Hacıağaoğlu. Periorbital effects of antiglaucoma eye drops: A comparative analysis of prostaglandin analogs and other agents. Ann Clin Anal Med 2025;16(9):627-631
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc/4.0/
Comparison of surgical outcomes of patients with vascularized or nonvascularized grafting in scaphoid nonunion surgery
Mustafa Altıntaş 1, Okan Ateş 1, Ali Özdemir 2
1 Department of Orthopaedics and Traumatology, Gazi Yaşargil Training and Research Hospital, Diyarbakır, 2 Department of Orthopaedic and Traumatology and Hand Surgery, Selcuk University Faculty of Medicine, Konya, Turkiye
DOI: 10.4328/ACAM.22799 Received: 2025-07-03 Accepted: 2025-08-04 Published Online: 2025-08-14 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):632-636
Corresponding Author: Mustafa Altıntaş, Department of Orthopaedics and Traumatology, Gazi Yaşargil Training and Research Hospital, Diyarbakır, Turkiye. E-mail: drgoldstone4721@gmail.com P: +90 412 258 00 60 Corresponding Author ORCID ID: https://orcid.org/0000-0003-1272-7648
Other Authors ORCID ID: Okan Ateş, https://orcid.org/0000-0002-4534-4101 . Ali Özdemir, https://orcid.org/0000-0002-8835-9741
This study was approved by the Ethics Committee of Gazi Yaşargil Training and Research Hospital (Date: 2023-05-26, No: 410)
Aim: We aimed to compare the results of surgery for scaphoid fracture nonunion performed with vascularized bone grafting (VBG) or nonvascularized bone grafting (NVBG).
Materials and Methods: This study was conducted with 34 patients with scaphoid fracture nonunion. Patients were divided into two groups, including those treated with VBG (n=17) and those treated with NVBG (n=17).
Results: Union rates did not differ significantly between the VBG and NVBG groups (p = 0.335). There was also no significant difference in visual analogue scale (VAS) or Disabilities of the Arm, Shoulder, and Hand (DASH) scores between the two groups. However, Mayo scores were significantly higher in the VBG group compared to the NVBG group (79.41±13.57 versus 64.41±25.67; p=0.027).
Discussion: In this study, union rates in patients who underwent scaphoid fracture nonunion surgery with VBG or NVBG were compatible with the literature, and both methods were found to be reliable. There was no statistical difference between them.
Keywords: scaphoid, nonunion, vascularized bone graft, nonvascularized bone graft
Introduction
Scaphoid fractures are the most common type of carpal bone fracture, accounting for 60% of all carpal bone fractures. Injuries of the scaphoid bone are not easy to diagnose or treat because of the bone’s complex 3-dimensional structure. The scaphoid bone anatomically forms a connection between the proximal and distal carpal rows and the distal radius; therefore, it is subjected to high mechanical stress [1,2]. Approximately 80% of the scaphoid surface is covered by cartilage. The blood supply comes from the dorsal carpal branch of the radial artery, which accounts for 70-80% of the blood supply to the scaphoid and enters the bone from the distal to proximal direction. The superficial palmar branch of the radial artery provides 20-30% of the blood supply to the scaphoid, primarily to the distal scaphoid. Therefore, the proximal scaphoid has poor blood supply, contributing to its longer healing time and higher nonunion rate [3].
Nonunion after scaphoid fractures occurs in 2-15% of cases but may reach rates of 30% when the fracture is located at the proximal pole. Other risk factors include avascular necrosis at the proximal pole, unreduced or unstable fracture, a delay of more than 4 weeks in the treatment of the fracture, and active smoking [4]. Despite the many studies conducted to date, there is no consensus on the optimal treatment of scaphoid nonunion. The general tendency in treatment is a combination of bone grafting and internal fixation [5]. However, there are different opinions on the choice of vascularized bone grafting (VBG) versus nonvascularized bone grafting (NVBG). For example, VBG has been reported to be a better treatment option because of its faster healing process, shorter immobilization time, and higher chance of stability due to the activity of living cells that provide nutrients to the bone structure [6,7]. However, VBG is technically more difficult than NVBG, requires microsurgical techniques, and results in long surgical time and donor site morbidity [8].
The aim of this study was to compare the treatment results of cases of scaphoid nonunion treated surgically with VBG or NVBG.
Materials and Methods
Institutional and researcher approval was obtained for this study. Ethical approval was granted by the ethics committee of the hospital where all imaging and patient procedures were performed in a single center. The study had no financial incentive. All procedures were performed in accordance with the ethical rules and principles of the Declaration of Helsinki.
In this retrospective study, patients over 18 years of age who were followed for at least 1 year at a Level 1 trauma center between September 2020 and March 2023 with a diagnosis of scaphoid nonunion were analyzed. A total of 34 patients (32 male, 2 female) were included; 8 patients who could not be reached for follow-up were excluded. All patients were diagnosed by physical examination followed by plain radiographs. After diagnosis, each patient was evaluated with 3-dimensional computed tomography to determine the fracture configuration and fragment size, and all patients underwent preoperative magnetic resonance imaging to assess for avascular necrosis. Exclusion criteria were previous wrist surgery, carpal bone fractures other than those of the scaphoid, neurological or systemic inflammatory diseases, and severe arthritis. Patients who met the criteria were called to the hospital for follow-up, and the necessary tests and examinations were performed. These patients were divided into two groups of 17: those treated with VBG based on the volar carpal artery and those treated with NVBG from the distal radius. Demographic characteristics, time from fracture to surgery, smoking status, and Herbert and Fisher classification were recorded preoperatively. All patients were evaluated with wrist radiographs postoperatively and at 1, 3, 6, and 12 months. Radiologically confirmed callus formation across at least 3 cortices at the fracture line was considered evidence of consolidation of the fracture. All surgical procedures were performed by the same senior surgeon; such a large number of patients could be treated by a single hand surgeon because he is the only hand surgeon serving approximately 5 million people across 6 cities. Preoperative and postoperative evaluations were performed by the same specialist.
Surgical Techniques
NVBG Surgical Technique
Under regional block anesthesia, the extremity was exsanguinated with an elastic bandage, and a tourniquet applied to the arm was inflated to 100 mmHg above the systolic blood pressure. The flexor carpi radialis sheath was opened through a volar incision, and the tendon was retracted ulnarly. The wrist capsule was accessed, and the joint capsule was opened with a Z incision from the volar side. The scaphoid fracture was exposed, and the fibrotic and sclerotic nonunion tissue along the fracture line was removed with a curette and burr until bleeding, well-vascularized bone was reached. The nonvascularized bone graft was then taken from the volar aspect of the radius proximal to the incision and placed in the fracture line, and the fracture was fixed with one headless compression screw (3.5-mm headless screw, Zimed, Ankara, Turkey) directed from distal to proximal under fluoroscopic control. The joint capsule was closed. The skin was then closed, and the operation was completed with application of a short arm splint.
VBG Surgical Technique
Under regional block anesthesia, the tourniquet applied to the arm was inflated to 100 mmHg above the systolic blood pressure without exsanguinating the extremity. Under 2.5× loupe magnification, the flexor carpi radialis sheath was opened through a volar incision, and the tendon was retracted ulnarly. The wrist capsule was accessed, and the joint capsule was opened with a Z incision from the volar side. The scaphoid fracture was exposed, and the fibrotic and sclerotic nonunion tissue along the fracture line was removed with a curette and burr until bleeding, well-vascularized bone was reached. The volar carpal artery, running parallel to the wrist joint, was then identified in the proximal part of the incision, on the volar aspect of the radius distal to the pronator quadratus muscle. The artery was followed toward the ulnar side, and the vascularized bone graft was harvested with thin osteotomes from the area where the artery enters the radius on the ulnar side (Figure 1). The proximal pedicle was carefully lifted over the bone and dissected sufficiently to reach the fracture line. The vascularized bone graft was placed in the fracture line and, under fluoroscopic control, the fracture was fixed with one headless compression screw (3.5-mm headless screw, Zimed) directed from distal to proximal (Figures 2, 3). The joint capsule was closed. The skin was then closed, and the operation was completed with application of a short arm splint.
Postoperative union time, nonunion status, cast duration, postoperative 1-year visual analogue scale (VAS) score, modified Mayo wrist score [9], and Disabilities of the Arm, Shoulder, and Hand (DASH) Questionnaire score [10] were analyzed for all patients. The Mayo wrist score assesses the presence and intensity of pain, range of motion, and hand grip strength, expressed as percentage points compared to the uninjured hand, as well as functional status in terms of activity performance. Outcome scoring is as follows: 90-100, excellent; 80-90, good; 60-80, satisfactory; and < 60, poor. The DASH Questionnaire consists of 30 questions that evaluate the patient’s ability to perform daily activities in the past week, regardless of which hand is used. Scores range from 0, indicating no disability, to 100, indicating complete disability. Postoperatively, patients received short arm casts, which were removed at week 8. Finger exercises were performed until the casts were removed, and rehabilitation was started after cast removal. In rehabilitation, passive assisted wrist movements were performed until radiological union was confirmed, after which active movements were started. Stretching exercises were performed with a physiotherapist by patients with restricted mobility.
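The Mayo score bands described above can be expressed as a small helper function. Note that this is an illustrative sketch: the published ranges overlap at their endpoints (80, 90), so the boundary assignments below are an assumption.

```python
def mayo_category(score: float) -> str:
    """Map a modified Mayo wrist score (0-100) to its outcome category.
    Boundary handling (90 -> excellent, 80 -> good, 60 -> satisfactory)
    is an assumption, since the published bands overlap at these values."""
    if score >= 90:
        return "excellent"
    if score >= 80:
        return "good"
    if score >= 60:
        return "satisfactory"
    return "poor"

# Applied to the group means reported in this study:
print(mayo_category(79.41))  # VBG group mean -> "satisfactory"
print(mayo_category(64.41))  # NVBG group mean -> "satisfactory"
```

Under this mapping, both group means fall in the "satisfactory" band, even though the VBG mean sits just below the "good" threshold.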
Statistical Evaluation
The statistical analysis of this study was performed with NCSS (Number Cruncher Statistical System) 2007 Statistical Software (NCSS, LLC, Kaysville, UT, USA). Descriptive statistics were reported as mean, standard deviation (SD), median, and interquartile range (IQR). The distribution of variables was examined with the Shapiro-Wilk normality test. The independent t-test was used to compare the two groups for normally distributed variables, and the Mann-Whitney U test was used for variables that did not show normal distribution. Qualitative data were compared with the chi-square and Fisher exact tests. Results were evaluated at a significance level of p < 0.05.
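As an illustration of the Mann-Whitney U test named above, here is a standard-library-only sketch using the tie-corrected normal approximation. This is not the NCSS implementation; statistical packages typically use the exact U distribution for samples this small, so p-values may differ slightly.

```python
from math import erf, sqrt

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the tie-corrected normal
    approximation; returns (U statistic for x, approximate p-value)."""
    n1, n2 = len(x), len(y)
    n = n1 + n2
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    rank_sum_x = 0.0
    tie_term = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1  # pooled[i:j] share the same value
        midrank = (i + 1 + j) / 2  # average of ranks i+1 .. j
        rank_sum_x += midrank * sum(1 for k in range(i, j) if pooled[k][1] == 0)
        t = j - i
        tie_term += t ** 3 - t  # tie correction accumulator
        i = j
    u = rank_sum_x - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 / 12 * ((n + 1) - tie_term / (n * (n - 1))))
    z = abs(u - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided normal tail
    return u, p
```

For example, `mann_whitney_u([1, 2, 3], [4, 5, 6])` returns U = 0 with a p-value near 0.05, reflecting complete separation of two very small groups.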
Ethical Approval
This study was approved by the Ethics Committee of Gazi Yaşargil Training and Research Hospital (Date: 2023-05-26, No: 410).
Results
Of the 34 analyzed patients, 32 (94%) were male and 2 (6%) were female. Patient ages ranged from 18 to 37 years, and the mean age (±SD) was 22.71 ± 4.92 years in the VBG group and 25.47 ± 5.40 years in the NVBG group. There were no significant differences between the groups in terms of demographic characteristics (p > 0.05), as shown in Table 1. The mean delay between fracture and surgery was similar between the groups (8.24 ± 3.11 in the VBG group and 9.06 ± 5.33 in the NVBG group; p > 0.05). The Herbert-Fisher class was D1 for 5 patients and D2 for 12 patients in the VBG group; likewise, it was D1 for 5 patients and D2 for 12 patients in the NVBG group. No significant differences were found between the groups in terms of smoking status, cast use, or follow-up duration.
A total of 29 patients achieved union. Union was not achieved by 1 patient (5%) in the VBG group and 4 patients (23%) in the NVBG group. Although union was achieved by more patients in the VBG group, the difference between the groups was not statistically significant (p > 0.05). No complications were encountered in either group. The results of the patients who had achieved union at the end of the follow-up period are shown in Table 1. There were no significant differences between the groups in terms of VAS or DASH scores (p > 0.05). For the Mayo wrist score, the mean was 79.41 ± 13.57 and the median (IQR) was 80 (75-87.5) in the VBG group compared to 64.41 ± 25.67 and 75 (45-82.5) in the NVBG group, respectively. Thus, Mayo wrist scores were statistically significantly higher in the VBG group than the NVBG group (p = 0.027). No significant complications such as infection, loosening/dislocation of screws, loss of fracture reduction, or graft failure were observed during treatment.
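With the union counts reported here (16 of 17 in the VBG group vs. 13 of 17 in the NVBG group), the two-sided Fisher exact test can be reproduced with the Python standard library alone; it yields the p = 0.335 reported in the abstract. The helper below is a sketch limited to a single 2×2 table.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].
    Sums hypergeometric probabilities of all tables no more likely
    than the observed one (the usual two-sided convention)."""
    row1, col1, n = a + b, a + c, a + b + c + d
    total = comb(n, row1)

    def prob(k):  # probability of k successes in row 1 under the null
        return comb(col1, k) * comb(n - col1, row1 - k) / total

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Union: 16 of 17 (VBG) vs. 13 of 17 (NVBG)
p = fisher_exact_2x2(16, 1, 13, 4)
print(round(p, 3))  # -> 0.335, matching the reported value
```

That the difference in union rates (94% vs. 76%) does not reach significance at this sample size illustrates the limited power of a 34-patient comparison.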
Discussion
According to the data obtained in our study, the vascularity of the bone graft does not have a significant effect on the results of scaphoid fracture nonunion surgery. However, VBG dissection is technically more difficult and requires surgical experience; in these cases, dissection must be performed with special equipment such as loupes or a microscope. In addition, even if the graft is placed with a pedicle, it is questionable how long a thin pedicle can maintain the blood supply after wound closure. Surgeries with NVBG are technically easier, and there are no concerns regarding the preservation of graft vascularity.
Various treatments for scaphoid fractures have been described according to anatomical localization and stage. In the present study, we compared the results of scaphoid nonunion surgery performed with pedicled vascularized grafts based on the volar carpal artery and with cancellous grafts from the distal radius. No clinical or functional differences were found among our patients operated on for nonunion of scaphoid fractures other than the Mayo wrist score, and we do not consider this isolated difference clinically decisive. The similar union results observed in our patient groups are consistent with those reported in the literature: the union rates in the VBG and NVBG groups were 94% and 76%, respectively, with no significant difference in the healing process.
Different results have been reported in the literature in comparisons of scaphoid surgeries performed with VBG and NVBG. In a study of 73 patients, Hirche et al. [11] obtained union in 21 of 28 patients (75%) in the vascularized graft group and in 37 of 45 patients (82%) in the nonvascularized group, reporting no difference between the groups. In a study that included 31 patients, Aibinder et al. [12] obtained a union rate of 79% in patients with vascularized grafts and 71% in patients with nonvascularized grafts from the iliac wing. They found no significant difference between these groups. Matic et al. [13] compared xenografts and vascularized grafts in a study of 30 patients, obtaining union in 13 of 15 patients (86.7%) in the vascularized group and 12 of 15 patients (80%) in the xenograft group. They found no significant difference between the groups. Delamarre et al. [14] reported good union rates in all subgroups of their study, which included a total of 60 patients, and found no difference between the groups. In an older study from 2008, Kawamura et al. [15] reported that the use of vascularized grafting increased the union rate and speed. There are many studies and meta-analyses in the literature on scaphoid fractures, and many recent studies have reported no difference in outcomes when different types of grafts are used. However, some studies have reported that union rates are higher with VBG. As a more objective evaluation, Baamir et al. [8] reported no difference between grafting methods in their umbrella review study. In our study, we did not find any difference between grafting methods, and the fact that our results are compatible with those of previous studies in the literature increases the reliability of our findings. Although a better union rate may theoretically be obtained with vascularized grafts, these results show no difference between graft types.
In functional evaluations, Hirche et al. [11] found no difference between VBG and NVBG groups in terms of range of motion or Mayo wrist scores. In their meta-analysis, Zhang et al. [6] reported that although there were differences in union times between graft groups, a resulting difference in terms of functional outcome was not observed. From a functional point of view, it is not surprising to obtain similar functional results since the union rates and times are similar between the groups.
Limitations
The most important limitations of our study were its small sample size and retrospective nature. Another limitation was the lack of preoperative functional data and of comparison with the corresponding extremity of the uninjured side. It was not possible to determine the exact time of fracture union because follow-up intervals lengthened after 3 months. Furthermore, the loss of 8 patients to follow-up is another limitation.
Conclusion
In this study, the union rates of patients who underwent scaphoid fracture nonunion surgery with VBG and NVBG were compatible with rates previously reported in the literature, and no difference was found between the two types of grafts. While Mayo wrist scores were better in the VBG group than in the NVBG group, there were no differences between the groups in terms of VAS or DASH scores. Although our study has some limitations, the fact that all surgeries were performed by a single surgeon is a strength in terms of the homogeneity of surgical technique and outcomes between the groups. The promising results of patients who underwent fracture surgery with both VBG and NVBG suggest that these are reliable treatment options for patients with scaphoid nonunion.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content including study design, data collection, analysis and interpretation, writing some of the main lines, or all of the preparation and scientific review of the contents and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Almigdad A, Al-Zoubi A, Mustafa A, et al. A review of scaphoid fracture, treatment outcomes, and consequences. Int Orthop. 2024;48(2):529-36.
2. Langer MF, Unglaub F, Breiter S. Anatomie und pathobiomechanik des skaphoids. [Anatomy and pathobiomechanics of the scaphoid.] Unfallchirurg. 2019;122(3):170-81.
3. Dias JJ, Brenkel IJ, Finlay DB. Patterns of union in fractures of the waist of the scaphoid. J Bone Joint Surg Br. 1989;71(2):307-10.
4. Larson AN, Bishop AT, Shin AY. Dorsal distal radius vascularized pedicled bone grafts for scaphoid nonunions. Tech Hand Up Extrem Surg. 2006;10(4):212-23.
5. Pinder RM, Brkljac M, Rix L. Treatment of scaphoid nonunion: A systematic review of the existing evidence. J Hand Surg Am. 2015;40(9):1797-805.e3.
6. Zhang H, Gu J, Liu H. Pedicled vascularized versus non-vascularized bone grafts in the treatment of scaphoid non-union: A meta-analysis of comparative studies. ANZ J Surg. 2021;91(11):E682-9.
7. Ross PR, Lan WC, Chen JS. Revision surgery after vascularized or non-vascularized scaphoid nonunion repair: a national population study. Injury. 2020;51(3):656-62.
8. Baamir A, Dhellemmes O, Coquerel-Beghin D. Graft choice for managing scaphoid non-union: umbrella review. Hand Surg Rehabil. 2024;43(4):101759.
9. Amadio PC, Berquist TH, Smith DK. Scaphoid malunion. J Hand Surg Am. 1989;14(4):679-87.
10. Hudak PL, Amadio PC, Bombardier C. Development of an upper extremity outcome measure: the DASH (disabilities of the arm, shoulder and hand) [corrected]. The upper extremity collaborative group (UECG). Am J Ind Med. 1996;29(6):602-8.
11. Hirche C, Xiong L, Heffinger C, et al. Vascularized versus non-vascularized bone grafts in the treatment of scaphoid non-union. J Orthop Surg (Hong Kong). 2017;25(1):2309499016684291.
12. Aibinder WR, Wagner ER, Bishop AT. Bone grafting for scaphoid nonunions: is free vascularized bone grafting superior for scaphoid nonunion? Hand (N Y). 2019;14(2):217-22.
13. Matić S, Vučković Č, Lešić A, et al. Pedicled vascularized bone grafts compared with xenografts in the treatment of scaphoid nonunion. Int Orthop. 2021;45(4):1017-23.
14. Delamarre M, Leroy M, Barbarin M. Long-term clinical and radiological results after scaphoid non-union treatment: a retrospective study about 60 cases. Eur J Orthop Surg Traumatol. 2024;34(1):507-15.
15. Kawamura K, Chung KC. Treatment of scaphoid fractures and nonunions. J Hand Surg Am. 2008;33(6):988-97.
Download attachments: 10.4328.ACAM.22799
Mustafa Altıntaş, Okan Ateş, Ali Özdemir. Comparison of surgical outcomes of patients with vascularized or nonvascularized grafting in scaphoid nonunion surgery. Ann Clin Anal Med 2025;16(9):632-636
Efficacy and safety of 10F percutaneous catheter versus 28F chest tube in pneumothorax: A retrospective comparative study
Ismail Dal
Department of Thoracic Surgery, Faculty of Medicine, Kastamonu University, Kastamonu, Turkey
DOI: 10.4328/ACAM.22800 Received: 2025-07-03 Accepted: 2025-08-04 Published Online: 2025-08-12 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):637-641
Corresponding Author: Ismail Dal, Department of Thoracic Surgery, Faculty of Medicine, Kastamonu University, Kastamonu, Turkey. E-mail: idal@kastamonu.edu.tr P: +90 366 214 10 53 Corresponding Author ORCID ID: https://orcid.org/0000-0002-5118-0780
This study was approved by the Ethics Committee of Kastamonu University (Date: 2024-07-02, No: 2024-KAEK-24)
Aim: Percutaneous catheter drainage has emerged as a less invasive alternative to traditional large-bore chest tubes in the management of pneumothorax. This study aimed to compare the clinical outcomes of 10F percutaneous catheters with 28F chest tubes.
Materials and Methods: A retrospective analysis was conducted on 78 patients treated for pneumothorax between May 2023 and May 2025. Patients were divided into two groups: those treated with a 10F percutaneous catheter (n=40) and those treated with a 28F chest tube (n=38). Demographics, pneumothorax etiology, length of hospital stay, drainage duration, prolonged air leak, and need for video-assisted thoracoscopic surgery (VATS) were evaluated. Statistical analyses included Mann–Whitney U, Fisher’s exact, and Chi-square tests with Monte Carlo simulation where appropriate.
Results: There were no significant differences between groups in age (p=0.1596), sex (p=0.1406), pneumothorax side (p=0.4383), or COPD prevalence (p=0.9382). Etiological distribution showed a statistically significant difference (p=0.0051); notably, 10F catheters were not utilized in cases of penetrating trauma. Mean hospital stay and drainage duration were not significantly different between groups (p=0.0656 and p=0.2709, respectively). The rates of prolonged air leak (p=0.3493) and VATS requirement (p=0.9495) were also similar.
Discussion: The 10F percutaneous catheter demonstrated comparable efficacy and safety to the conventional 28F chest tube in the management of pneumothorax. Despite being less invasive, it yielded similar clinical outcomes, supporting its use as a safe and effective alternative for treating spontaneous pneumothorax and blunt trauma cases.
Keywords: pneumothorax, catheterization, chest tubes, drainage, air leak
Introduction
Pneumothorax, defined as the presence of air in the pleural space, can compromise respiratory function and lead to potentially life-threatening consequences if not managed promptly and effectively [1]. It may occur spontaneously (primary or secondary), traumatically, or iatrogenically, and its clinical presentation varies from asymptomatic to severe respiratory distress depending on the underlying etiology and patient comorbidities [2,3]. The primary goal of treatment is to evacuate intrapleural air, allow lung re-expansion, and prevent recurrence [4].
Traditionally, large-bore chest tubes (e.g., 28–32 French (F)) inserted via thoracostomy have been the standard of care for moderate to large pneumothoraces, especially in traumatic or secondary cases [5]. However, these tubes are associated with significant patient discomfort, longer hospital stays, and complications such as infection, bleeding, and injury to surrounding structures [6]. In recent years, minimally invasive techniques using small-bore catheters, typically inserted percutaneously under imaging guidance, have gained popularity due to their reduced procedural trauma and improved patient comfort [7-9].
Several studies have demonstrated that small-bore catheters (≤14F), including pigtail catheters and percutaneous drainage tubes, can achieve clinical outcomes similar to those of large-bore chest tubes in the treatment of spontaneous or iatrogenic pneumothorax [10-12]. In particular, 10F percutaneous catheters have been increasingly adopted as an initial management strategy, especially in cases not related to major trauma [13,14]. The American College of Chest Physicians and the British Thoracic Society both acknowledge small-bore catheters as appropriate for stable patients with spontaneous pneumothorax, although evidence remains heterogeneous [1,14].
Despite growing interest in percutaneous catheters, there remains some reluctance among clinicians to favor them over conventional chest tubes, especially in more complex or traumatic cases [15]. Concerns persist regarding prolonged air leaks, higher failure rates, and the potential need for escalation to surgical intervention such as video-assisted thoracoscopic surgery (VATS) [16]. Furthermore, comparative data on the efficacy and safety of small-bore percutaneous catheters versus standard large-bore chest tubes in mixed patient populations—including trauma cases—remain limited, particularly in real-world clinical settings.
This study aims to address this knowledge gap by retrospectively comparing the clinical outcomes of 10F percutaneous catheters and 28F chest tubes in patients treated for pneumothorax over a two-year period at a tertiary care center. By analyzing parameters such as drainage duration, length of hospital stay, complication rates, and need for surgical intervention, we aim to evaluate whether the less invasive 10F catheter represents a safe and effective alternative to traditional chest tubes across a diverse pneumothorax population.
Materials and Methods
Study Design and Setting
This retrospective comparative study was conducted at a tertiary care university hospital between May 2023 and May 2025. The study protocol was approved by the institutional ethics committee (Approval No: 2024-KAEK-24) and conducted in accordance with the Declaration of Helsinki. Informed consent was waived by the ethics committee due to the retrospective nature of the study.
Study Population
Patients aged ≥18 years who were diagnosed with pneumothorax and underwent interventional drainage were included. Both isolated pneumothorax and hemopneumothorax cases were eligible for inclusion. Patients were divided into two groups based on the initial intervention: the 10F percutaneous catheter group and the 28F chest tube group. The choice of drainage method was at the discretion of the attending physician, based on clinical status and radiological findings.
Patients with tension pneumothorax, hemopneumothorax, and various etiologies—including spontaneous, traumatic (blunt and penetrating), and iatrogenic pneumothorax—were eligible for inclusion. Clinical decision-making was based on patient stability and radiological findings. Clinically unstable patients were treated with immediate chest tube thoracostomy. In stable patients, 10F percutaneous catheters were preferred when imaging showed >2 cm separation from the lateral chest wall to allow safe percutaneous access. If catheter placement failed or was technically not feasible, conversion to chest tube drainage was performed. In cases with <2 cm separation, chest tube placement was preferred. Additionally, in all cases of penetrating trauma, large-bore chest tubes were used in accordance with standard trauma care protocols.
Exclusion Criteria
Patients were excluded if they (i) had incomplete medical records, (ii) were non-compliant with treatment (e.g., self-discharge or refusal of procedure), or (iii) experienced a failed catheter or tube insertion requiring immediate conversion.
Operator and Standardization
All procedures, including both catheter and chest tube placements, as well as any surgical interventions, were performed by a single experienced thoracic surgeon, ensuring consistency in technique and clinical decision-making throughout the study period.
Diagnostic and Procedural Approach
All diagnoses were confirmed via thoracic computed tomography (CT). In patients considered for catheter placement, a wireless Clarius ultrasound device (Clarius Mobile Health Corp., Vancouver, Canada) was used to confirm the presence and extent of pneumothorax and to assess thoracic anatomy in order to determine suitability for catheter thoracostomy (Figure 1).
In the 10F catheter group, procedures were performed under real-time ultrasonography (USG) guidance at the 4th or 5th intercostal space along the mid-axillary line, within the “safe triangle”. Chest tubes (28F) were placed via standard blunt dissection using the same anatomical landmarks. All procedures were carried out under sterile conditions. A post-procedural chest radiograph was obtained to confirm the correct positioning of the catheter and to assess lung re-expansion (Figure 2).
Surgical Intervention
Patients who did not respond to drainage or developed complications were evaluated for video-assisted thoracoscopic surgery (VATS). Surgical indications were based on standard clinical criteria including persistent air leak, incomplete lung expansion, or recurrence, at the discretion of the surgeon.
Data Collection
Demographic and clinical variables recorded included age, sex, comorbidities (especially chronic obstructive pulmonary disease (COPD)), etiology (spontaneous, traumatic, iatrogenic), side of pneumothorax, length of hospital stay, drainage duration, presence of prolonged air leak (defined as air leak >7 days), and need for VATS. Data were collected from electronic medical records by two independent researchers and verified for consistency.
Sample Size Calculation
Using G*Power (version 3.1; Heinrich Heine University, Düsseldorf, Germany), a sample size of 72 patients (36 per group) was calculated to detect a moderate effect size (d = 0.6) in hospital stay with α = 0.05 and power = 0.80. The final study population consisted of 78 patients (40 in the 10F group and 38 in the 28F group), satisfying the power requirement.
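The calculation above can be checked with a quick normal-approximation sketch. G*Power itself uses the noncentral t distribution, so its per-group figure depends on the options selected and will differ slightly from the illustrative function below, which is not the software's algorithm:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80,
                two_sided: bool = True) -> int:
    """Approximate per-group sample size for comparing two means with
    effect size d (Cohen's d), via the normal approximation
    n = 2 * ((z_alpha + z_beta) / d) ** 2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2) if two_sided else nd.inv_cdf(1 - alpha)
    z_beta = nd.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# d = 0.6, alpha = 0.05, power = 0.80, as reported in the text
required = n_per_group(0.6)
```

With these inputs the approximation yields 44 per group for a two-sided test and 35 for a one-sided test; the exact noncentral-t result differs modestly.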
Statistical Analysis
Statistical analyses were performed using IBM SPSS Statistics for Windows, version 26.0 (IBM Corp., Armonk, NY, USA). The Shapiro–Wilk test was used to assess normality of continuous variables. Non-normally distributed continuous data were compared using the Mann–Whitney U test. Categorical variables were analyzed using the Chi-square test or Fisher’s exact test, with Monte Carlo simulation applied where appropriate. A two-tailed p-value <0.05 was considered statistically significant.
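As an illustration of the main between-group comparison, a minimal pure-Python Mann–Whitney U test (large-sample normal approximation, ties counted as half-wins, no tie correction in the variance) might look like this; the sample values are hypothetical, not the study's data:

```python
from math import erf, sqrt

def mann_whitney_u(a, b):
    """Mann-Whitney U for sample a versus b, with a two-sided p-value
    from the large-sample normal approximation (ties count as 0.5)."""
    n1, n2 = len(a), len(b)
    # U = number of (a, b) pairs where a wins, ties contributing 0.5
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mean = n1 * n2 / 2
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mean) / sd
    p = 1 - erf(abs(z) / sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return u, p

# hypothetical drainage durations in days (NOT the study's raw data)
catheter_days = [5, 6, 8, 4, 7, 5]
tube_days = [4, 5, 3, 4, 6, 4]
u_stat, p_value = mann_whitney_u(catheter_days, tube_days)
```

Statistical packages additionally apply tie and continuity corrections, so their p-values differ slightly from this sketch.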
Ethical Approval
This study was approved by the Kastamonu University Clinical Ethics Committee (Date: 2024-07-02, No: 2024-KAEK-24).
Results
A total of 91 patients were assessed for eligibility during the study period. Thirteen patients were excluded for the following reasons: five patients died due to non-pneumothorax-related causes before treatment could be completed, three were referred to external centers during follow-up, two patients were initially managed with percutaneous catheter insertion but were switched to chest tube drainage due to massive air leak and failure of lung re-expansion, two had incomplete medical records, and one patient was non-compliant and intentionally removed the catheter. In two patients, although a percutaneous catheter was initially planned, the procedure could not be successfully completed due to technical failure in advancing the catheter. As a result, these patients were managed with standard chest tube thoracostomy and included in the 28F group for outcome analysis. Ultimately, 78 patients were included in the final analysis: 40 in the 10F catheter group and 38 in the 28F chest tube group (Figure 3). There were no statistically significant differences between the two groups in terms of mean age (10F: 43.9 ± 22.7 years vs. 28F: 36.0 ± 17.4 years; p = 0.1596) or sex distribution (male: 75.0% in 10F vs. 89.5% in 28F; p = 0.1406). The side of pneumothorax (right-sided: 60.0% in 10F vs. 68.4% in 28F) did not significantly differ between groups (p = 0.4383), nor did the presence of COPD (10.0% vs. 10.5%; p = 0.9382) (Table 1). The etiology of pneumothorax showed a significant difference between the two groups (p = 0.0051). The 10F group included more patients with primary spontaneous pneumothorax (PSP) (65.0% vs. 47.4%) and blunt trauma (22.5% vs. 15.8%), while penetrating trauma was observed exclusively in the 28F group (0% vs. 21.1%) (Table 1).
The mean hospital stay was 7.4 ± 6.7 days in the 10F group and 4.6 ± 2.3 days in the 28F group (p = 0.0656). The mean drainage duration was 5.8 ± 4.4 days in the 10F group and 4.4 ± 2.1 days in the 28F group (p = 0.2709). Although both durations were longer in the 10F group, these differences were not statistically significant (Table 2).
In terms of outcomes, prolonged air leak (defined as >7 days) was observed in 8 patients (20.0%) in the 10F group and 4 patients (10.5%) in the 28F group (p = 0.3493). The number of patients requiring video-assisted thoracoscopic surgery (VATS) was equal in both groups (n=3, p = 0.9495) (Table 2).
No additional complications were observed in the 10F percutaneous catheter group other than prolonged air leak. However, in two patients, the procedure was unsuccessful due to technical difficulties in advancing the catheter, and these patients were subsequently treated with standard chest tube thoracostomy. Additionally, one patient with tension pneumothorax was managed successfully with a 10F percutaneous catheter without the need for further intervention.
Overall, the two groups exhibited comparable clinical outcomes across all measured parameters. The use of a 10F percutaneous catheter appears to be a safe and effective alternative to large-bore chest tubes, particularly in spontaneous pneumothorax and blunt traumatic cases.
Discussion
This study compared the clinical outcomes of 10F percutaneous catheters and 28F chest tubes in the treatment of pneumothorax. The findings demonstrated that, despite being less invasive, the 10F catheter achieved clinical outcomes comparable to those of the traditional large-bore chest tube, particularly in cases of spontaneous pneumothorax and blunt trauma. These results are consistent with previous studies suggesting that small-bore drainage systems can be a safe and effective alternative in selected patient populations [17,18].
The use of small-bore catheters has gained traction due to advantages such as reduced pain, shorter procedure times, and fewer complications related to insertion trauma [19,20]. In our study, although hospital stay and drainage duration were slightly longer in the 10F group, the differences were not statistically significant. This aligns with findings from previous trials that showed no major difference in clinical outcomes between small- and large-bore drainage devices in terms of air leak resolution and hospital course.
Etiologically, our study observed a significant difference between groups, primarily because penetrating trauma was exclusively managed with large-bore chest tubes. This reflects standard clinical practice, as penetrating injuries carry a higher risk of massive air leak or hemothorax, which may not be adequately managed with smaller devices [21]. However, in cases of primary spontaneous pneumothorax (PSP) and blunt trauma, the 10F catheter performed safely and effectively, consistent with reports recommending its use in stable, non-traumatic or closed trauma scenarios [22].
The rate of prolonged air leak was similar in both groups. Although small-bore catheters are sometimes associated with higher recurrence or leak persistence, our findings suggest no clinically meaningful disadvantage in this regard, especially when patient selection is appropriate. The equal need for VATS in both groups further supports the non-inferiority of percutaneous catheter use in uncomplicated cases.
Another strength of our study is that all interventions were performed by a single thoracic surgeon, ensuring procedural consistency. Additionally, thoracic CT and ultrasound guidance were used systematically for diagnosis and catheter placement, enhancing safety and standardization.
Our results support growing evidence that challenges the dogma favoring large-bore chest tubes in all pneumothorax cases. Although randomized controlled trials are limited, numerous retrospective and prospective cohort studies report comparable efficacy between small- and large-bore devices when used under appropriate clinical indications [23].
In our study, no additional complications were observed in patients treated with the percutaneous catheter apart from prolonged air leak. However, in two cases, the procedure was unsuccessful due to technical difficulties in advancing the catheter, necessitating conversion to standard chest tube drainage. This highlights that percutaneous techniques may not always be feasible, emphasizing the importance of anatomical suitability and operator experience in patient selection.
The slightly longer hospital stay and drainage duration observed in the 10F group may be attributed to two main factors. First, because air leaks could not be visually monitored through the 10F catheter, a more conservative discharge approach was adopted in some cases. Second, due to the percutaneous nature of catheter insertion, we preferred this method in patients with more than 2 cm of lung separation from the lateral chest wall to ensure safe placement. Consequently, patients with relatively larger pneumothoraces were more frequently included in the 10F group. These factors may have contributed to the observed differences, although the differences did not reach statistical significance in a sample powered to detect a moderate effect.
In our setting, the cost of a 28F chest tube with a closed underwater drainage system is approximately 2 USD, compared to 9 USD for a 10F pleural catheter set. While the catheter is more expensive upfront, it may offer advantages in patient comfort and ease of use. However, the inability to visually monitor air leaks led to a more cautious discharge approach in some cases, potentially increasing hospital stay and related costs. These factors highlight the importance of future studies evaluating the overall cost-effectiveness and patient-centered outcomes of small-bore catheter use.
Finally, a number of recent analyses report outcomes similar to ours. For example, Chang et al. [13] observed comparable rates of lung re-expansion, complications, and hospital stay between small-bore pigtail catheters and large-bore chest tubes in various pneumothorax settings. In addition, Lyons et al. [21] found in a multicenter cohort of traumatic pneumothorax cases that 8–10F catheters and 24–28F tubes demonstrated similar efficacy and conversion rates. These reports suggest that, under appropriate patient selection, small-bore catheters may offer outcomes broadly equivalent to those of traditional chest tubes and support their consideration as a less invasive alternative in suitable patients.
Conclusion
In conclusion, the 10F percutaneous catheter represents a safe and effective alternative to the conventional 28F chest tube in the treatment of pneumothorax, particularly in patients with spontaneous pneumothorax and blunt trauma. Its less invasive approach, combined with comparable clinical outcomes, supports its wider use in carefully selected cases.
Limitation
This study has limitations. First, its retrospective nature may introduce selection bias. Second, the sample size, although adequately powered, was modest. Third, long-term recurrence was not evaluated. Future prospective, multicenter studies with larger populations and long-term follow-up are needed to confirm these findings and better delineate the optimal indications for percutaneous catheter use in pneumothorax.
Scientific Responsibility Statement
The authors declare that they are responsible for the article's scientific content, including study design, data collection, analysis and interpretation, writing, preparation, and scientific review of the contents, and that they approved the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. DeMaio A, Semaan R. Management of pneumothorax. In: Feller-Kopman D, Maldonado F, editors. Pleural disease. Philadelphia: Elsevier; 2021. p. 729-38.
2. Baumann MH, Strange C, Heffner JE, et al. Management of spontaneous pneumothorax: an American College of Chest Physicians Delphi consensus statement. Chest. 2001;119(2):590-602.
3. MacDuff A, Arnold A, Harvey J. BTS Pleural Disease Guideline Group. Management of spontaneous pneumothorax: British Thoracic Society Pleural Disease Guideline 2010. Thorax. 2010;65(Suppl 2):ii18-31.
4. Tülüce K, Türüt H. Management of primary spontaneous pneumothorax: Our single-center, five-year experience. Turk Gogus Kalp Damar Cerrahisi Derg. 2022;30(1):75-82.
5. Etoch SW, Bar-Natan MF, Miller FB. Tube thoracostomy. Factors related to complications. Arch Surg. 1995;130(5):521-5.
6. Viftrup A, Laussten S. Pain during and after insertion of chest drains. Nord Sygeplejeforsk. 2020;10(2):127-38.
7. Horsley A, Jones L, White J. Efficacy and complications of small-bore, wire-guided chest drains. Chest. 2006;130(6):1857-63.
8. Wilson PM, Rymeski B, Xu X. An evidence-based review of primary spontaneous pneumothorax in the adolescent population. J Am Coll Emerg Physicians Open. 2021;2(3):e12449.
9. Rasihashemi SZ, Ramouz A, Amini H. Comparison of the therapeutic effects of a pigtail catheter and chest tube in the treatment of spontaneous pneumothorax: a randomized clinical trial study. Turk Thorac J. 2021;22(6):459-65.
10. Contou D, Razazi K, Katsahian S, et al. Small-bore catheter versus chest tube drainage for pneumothorax. Am J Emerg Med. 2012;30(8):1407-13.
11. Korczyński P, Górska K, Nasiłowski J. Comparison of small bore catheter aspiration and chest tube drainage in the management of spontaneous pneumothorax. Adv Exp Med Biol. 2015;866:15-23.
12. Lin YC, Tu CY, Liang SJ, et al. Pigtail catheter for the management of pneumothorax in mechanically ventilated patients. Am J Emerg Med. 2010;28(4):466-71.
13. Chang SH, Kang YN, Chiu HY. A systematic review and meta-analysis comparing pigtail catheter and chest tube as the initial treatment for pneumothorax. Chest. 2018;153(5):1201-12.
14. Roberts ME, Rahman NM, Maskell NA, et al. British Thoracic Society Guideline for pleural disease. Thorax. 2023;78(11):1143-56.
15. Kwiatt M, Tarbox A, Seamon MJ, et al. Thoracostomy tubes: a comprehensive review of complications and related topics. Int J Crit Illn Inj Sci. 2014;4(2):143-55.
16. Dugan KC, Laxmanan B, Murgu S. Management of persistent air leaks. Chest. 2017;152(2):417-23.
17. Mehra S, Heraganahally S, Sajkov D. The effectiveness of small-bore intercostal catheters versus large-bore chest tubes in the management of pleural disease: a systematic review. Lung India. 2020;37(3):198-203.
18. Ramírez-Giraldo C, Rey-Chaves CE, Rodriguez Lima DR. Management of pneumothorax with 8.3-French pigtail catheter: description of the ultrasound-guided technique and case series. Ultrasound J. 2023;15(1):1.
19. Anderson D, Chen SA, Godoy LA. Comprehensive review of chest tube management: a review. JAMA Surg. 2022;157(3):269-74.
20. Boccatonda A, Tallarico V, Venerato S. Ultrasound-guided small-bore chest drain placement: a retrospective analysis of feasibility, safety and clinical implications in internal medicine ward. J Ultrasound. 2025;28(2):389-96.
21. Lyons NB, Abdelhamid MO, Collie BL, et al. Small versus large-bore thoracostomy for traumatic hemothorax: a systematic review and meta-analysis. J Trauma Acute Care Surg. 2024;97(4):631-8.
22. Le KDR, Wang AJ, Sadik K. Pigtail catheter compared to formal intercostal catheter for the management of isolated traumatic pneumothorax: a systematic review and meta-analysis. Complications. 2024;1(3):68-78.
23. Orlando A, Cordero J, Carrick MM, et al. Comparing complications of small-bore chest tubes to large-bore chest tubes in the setting of delayed hemothorax: a retrospective multicenter cohort study. Scand J Trauma Resusc Emerg Med. 2020;28(1):56.
Ismail Dal. Efficacy and safety of 10F percutaneous catheter versus 28F chest tube in pneumothorax: A retrospective comparative study. Ann Clin Anal Med 2025;16(9):637-641
The Role of NLR, PLR, and hematologic parameters in the differential diagnosis of pediatric acute scrotum
Mustafa Tuşat
Department of Pediatric Surgery, Faculty of Medicine, Aksaray University, Aksaray, Turkey
DOI: 10.4328/ACAM.22807 Received: 2025-07-10 Accepted: 2025-08-11 Published Online: 2025-08-16 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):642-645
Corresponding Author: Mustafa Tuşat, Department of Pediatric Surgery, Faculty of Medicine, Aksaray University, Aksaray, Turkey. E-mail: mustafatusat42@hotmail.com P: +90 382 502 20 25 Corresponding Author ORCID ID: https://orcid.org/0000-0003-2327-4250
This study was approved by the Aksaray University Health Sciences Scientific Research Ethics Committee (Date: 2025-06-05, No: 2025/103)
Aim: The primary causes of acute scrotum are testicular torsion (TT) and epididymo-orchitis (EO), and the rapid and accurate differential diagnosis of these two conditions is crucial for preserving testicular function. The objective of this research is to assess the clinical value of the neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), and hematological parameters in the differential diagnosis of TT and EO in pediatric patients.
Materials and Methods: Ninety-nine children who presented with scrotal pain and were diagnosed with either TT or EO in the pediatric emergency department of Aksaray University Training and Research Hospital between January 1, 2019, and May 25, 2025, were included in the study.
Results: In the comparison of the groups, NLR and neutrophil counts were significantly higher in children with TT (p = 0.002). For the prediction of TT, the cut-off value for NLR was >3.53, with a sensitivity of 52.5% and specificity of 78% (AUC: 0.684 [0.580–0.788]); for neutrophil count, the cut-off was >8,915/mm³, with a sensitivity of 30.0% and specificity of 88.1% (AUC: 0.681 [0.577–0.785]).
Discussion: NLR and neutrophil count may serve as useful markers in distinguishing TT from EO, particularly in pediatric patients where physical examination and imaging findings are inconclusive.
Keywords: Testicular torsion, epididymo-orchitis, children, neutrophil-to-lymphocyte ratio, neutrophil count
Introduction
Acute scrotum is a clinical condition in childhood that typically requires urgent intervention and is characterized by sudden-onset scrotal pain, swelling, and tenderness [1]. Among children presenting with these symptoms, the most common etiologies are testicular torsion (TT) and epididymo-orchitis (EO) [2]. Rapid and accurate differentiation of these conditions is particularly critical in the case of testicular torsion, as any delay or misdiagnosis may result in testicular loss [3].
Diagnosis of TT and EO involves physical examination, laboratory testing, and imaging methods such as scrotal Doppler ultrasonography [4]. In certain cases, overlapping or inconclusive clinical and radiological findings may complicate the differential diagnosis, particularly in distinguishing early-stage testicular torsion from epididymo-orchitis [5]. As a result, there is a growing need to identify additional parameters that could support the diagnostic process.
In recent years, hematological parameters and derived markers recognized as indicators of inflammation have gained significance in the diagnosis and prognosis of numerous acute and chronic diseases [6,7]. Markers derived from complete blood count parameters, such as the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR), are inexpensive and easily calculable indicators thought to reflect systemic inflammation [7-10]. Since neutrophil levels typically increase and lymphocyte levels decrease during acute inflammatory processes, NLR is generally elevated in such cases [7,11]. Similarly, the inflammation-related rise in platelet counts and changes in lymphocyte levels make PLR a potential marker of inflammation as well [7,12].
Although various studies in the adult population have evaluated the diagnostic value of these biomarkers in acute scrotal pathologies, research on the role of NLR and PLR in differentiating testicular torsion from epididymo-orchitis in children remains limited [13,14].
This study aims to compare TT and EO cases in pediatric patients presenting with acute scrotum in terms of their NLR and PLR values and to assess the potential diagnostic utility of these hematologic markers. The ultimate goal is to contribute to early diagnosis and the establishment of appropriate management strategies.
Material and Methods
Study Design
This study used a retrospective design. A total of 99 children under the age of 18 who presented with scrotal pain to the pediatric emergency department of Aksaray University Training and Research Hospital between January 1, 2019, and May 25, 2025, and were diagnosed with TT or EO were included. Patients with chronic diseases, EO cases accompanied by appendix testis torsion, incomplete laboratory data, coexisting acute infections, or ongoing antibiotic therapy were excluded.
Data Collection
Clinical data were accessed via the institution’s electronic medical record system. Children diagnosed with TT via Doppler ultrasonography were classified as Group 1, and those diagnosed with EO were classified as Group 2. For each patient, age at admission and routine hematologic markers, including leukocyte, neutrophil, lymphocyte, monocyte, and platelet counts, were recorded. NLR (neutrophil/lymphocyte ratio) and PLR (platelet/lymphocyte ratio) values were also calculated.
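The derived ratios are simple quotients of complete-blood-count fields; a minimal sketch with illustrative counts (not patient data):

```python
def nlr_plr(neutrophils: float, lymphocytes: float, platelets: float):
    """Neutrophil-to-lymphocyte (NLR) and platelet-to-lymphocyte (PLR)
    ratios from absolute counts (cells/mm3)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes, platelets / lymphocytes

# illustrative counts only: neutrophils 7,795/mm3, lymphocytes 2,010/mm3,
# platelets 291,500/mm3
nlr, plr = nlr_plr(7_795, 2_010, 291_500)
```

Note that a group's median NLR is the median of the per-patient ratios, not the ratio of the group's median counts, so the two can legitimately differ.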
Statistical Analysis
Statistical analyses were performed using IBM SPSS Statistics for Windows, Version 23.0 (IBM Corp., Armonk, NY, USA). The Kolmogorov-Smirnov test was used to assess the distribution of numerical variables, and skewness and kurtosis coefficients were examined to support assumptions of normality. The Mann-Whitney U test, a non-parametric test, was used to compare two independent groups. Sensitivity and specificity values were evaluated using receiver operating characteristic (ROC) curve analysis. Descriptive statistics were reported as median (25–75 interquartile range, IQR) for non-normally distributed continuous variables. A p-value of less than 0.05 was considered statistically significant in all analyses.
Ethical Approval
This study was approved by the Aksaray University Health Sciences Scientific Research Ethics Committee (Date: 2025-06-05, No: 2025/103).
Results
Data from 99 patients were included in the analysis, with 40 children diagnosed with TT in Group 1 and 59 with EO in Group 2. The median (25–75 IQR) age at admission was 13.0 (12.0–15.0) years in Group 1 and 12.0 (9.0–15.0) years in Group 2, with no statistically significant difference between the groups (p = 0.766) (Table 1). The median (25–75 IQR) leukocyte and lymphocyte counts were 10,575.0 (8,970.0–13,900.0)/mm³ and 2,010.0 (1,607.5–2,372.5)/mm³ in Group 1, and 9,660.0 (7,880.0–11,890.0)/mm³ and 2,330.0 (2,020.0–2,750.0)/mm³ in Group 2, respectively; neither difference was statistically significant (p = 0.091 and p = 0.052, respectively). The median (25–75 IQR) neutrophil count was 7,795 (5,515–9,660)/mm³ in Group 1 and 5,920 (3,870–8,140)/mm³ in Group 2, a statistically significant difference (p = 0.002). The median (25–75 IQR) monocyte and platelet counts were 560.0 (395.0–710.0)/mm³ and 291,500 (256,000–315,750)/mm³ in Group 1, and 550.0 (450.0–680.0)/mm³ and 316,000 (267,000–346,000)/mm³ in Group 2; these differences were not statistically significant (p = 0.371 and p = 0.092, respectively) (Table 1).
The overall median (25–75 IQR) NLR value was 2.92 (1.93–4.55); it was 3.58 (2.59–5.66) in Group 1 and 2.36 (1.68–3.46) in Group 2, a statistically significant difference between the groups (p = 0.002). The overall median (25–75 IQR) PLR was 136.8 (102.0–180.1), and in Group 1 it was 134.8 (103.1–197.2), with no statistically significant difference observed between the groups in terms of PLR (p = 0.653) (Table 1).
In the ROC analysis, the cut-off value for NLR in predicting TT was determined as >3.53, with a sensitivity of 52.5% and a specificity of 78%. The corresponding AUC was 0.684 (95% CI: 0.580–0.788). For neutrophil count, the optimal cut-off value was >8915/mm³, with a sensitivity of 30.0% and a specificity of 88.1%. The AUC for neutrophil count was 0.681 (95% CI: 0.577–0.785) (Table 2) (Figure 1).
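A common way to derive an optimal ROC cut-off such as the NLR > 3.53 reported above is to scan candidate thresholds and keep the one maximizing Youden's J (sensitivity + specificity − 1). The sketch below is illustrative only; the study's ROC analysis was performed in SPSS, and the function and variable names are our assumptions.

```python
def youden_cutoff(values, labels):
    """Return (threshold, sensitivity, specificity) maximizing
    Youden's J for a 'value > threshold predicts positive' rule.

    labels: 1 = disease (e.g. TT), 0 = control (e.g. EO).
    Illustrative sketch, not the software used in the study.
    """
    best = None
    for t in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v > t)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v <= t)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v <= t)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v > t)
        sens = tp / (tp + fn)   # true-positive rate
        spec = tn / (tn + fp)   # true-negative rate
        j = sens + spec - 1     # Youden's J
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best[1], best[2], best[3]
```

On perfectly separable toy data the scan recovers the threshold that splits the groups with sensitivity and specificity of 1.0; on real, overlapping data it trades the two off, exactly as in the reported 52.5%/78% operating point.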
Discussion
In our study, we found that both NLR and neutrophil counts were significantly higher in children diagnosed with TT compared to those with EO (p = 0.002), while PLR and the other hematological parameters showed no significant differences between the groups. In predicting TT, the NLR cut-off value was identified as >3.53 with 52.5% sensitivity and 78% specificity (AUC: 0.684 [0.580–0.788]), whereas the neutrophil count cut-off was >8915/mm³ with 30.0% sensitivity and 88.1% specificity (AUC: 0.681 [0.577–0.785]).
The two most common causes of acute scrotal pain are TT and EO [2]. These conditions differ significantly in terms of treatment approaches and prognosis. While EO is typically managed with antibiotic therapy and supportive care, TT requires urgent surgical intervention [1]. Therefore, timely diagnosis and the initiation of appropriate treatment are critical for patients presenting with acute scrotal pain. However, the diagnostic process for acute scrotal pathologies such as TT and EO may be hindered by nonspecific symptoms, inconsistencies between clinical and radiological findings, or inconclusive imaging results, potentially leading to delays in diagnosis and increased morbidity, particularly in time-sensitive conditions like TT [15,16]. Accordingly, various recent studies have investigated the diagnostic value of supportive biomarkers for the rapid and accurate differentiation between TT and EO [17,18].
Considering that TT and EO are inflammatory conditions, we examined the potential diagnostic value of NLR and PLR—markers reflecting systemic inflammation—in distinguishing between these two common causes of acute scrotum. Several studies in the literature have reported differing results in this area.
In one study involving 75 pediatric patients (37 with EO and 38 with TT), NLR levels were found to be significantly higher in the TT group, while no significant difference in PLR levels was observed. The predictive NLR cut-off for TT was reported as >2.8 (AUC: 0.738, sensitivity: 56%, specificity: 84%) [13]. Another study involving 30 children with TT and 30 with EO also found significantly elevated NLR levels in the TT group, with no difference in PLR, and reported a cut-off value of >2.92 for NLR in predicting TT (AUC: 0.790, sensitivity: 51%, specificity: 87%) [14]. In a larger study with 187 EO and 71 TT cases, NLR was again significantly higher in the TT group, whereas PLR did not differ; the reported NLR cut-off for TT was >3.39 [19]. A study of 149 adult patients with TT and EO found significantly higher NLR and PLR levels in TT cases, with an NLR cut-off for TT of >3.91 (AUC: 0.628, sensitivity: 62%, specificity: 62.9%) [20]. In our study, we similarly observed significantly higher NLR values in the TT group compared to the EO group (p = 0.002), with no significant difference in PLR. These results are consistent with previous studies examining the role of NLR and PLR in distinguishing TT from EO in pediatric populations.
In the ROC analysis conducted in our study, we identified an NLR cut-off value of >3.53 (AUC: 0.684, sensitivity: 52.5%, specificity: 78%) for predicting TT. This cut-off is higher than those reported by Sağır et al. and Arslan et al., whose studies also focused on pediatric cases. The discrepancy may be attributable to differences in inclusion criteria: both of those studies excluded patients presenting more than 24 hours after symptom onset, whereas our study was larger and more inclusive with respect to presentation time.
Systemic inflammation, as well as stress conditions such as infection or ischemia, typically leads to an increase in neutrophils and platelets, which actively participate in the regulation of inflammatory responses, and a decrease in lymphocytes [7,21,22]. In a retrospective study evaluating hematological parameters in 40 adolescents with TT and 54 with EO, neutrophil counts were significantly elevated in the TT group, while lymphocyte and platelet counts were significantly higher in the EO group [23]. Similarly, in another study including a total of 250 TT and EO cases under the age of 25, neutrophil counts were reported to be significantly higher in the TT group, while no significant differences were found between the groups in terms of leukocyte, platelet, lymphocyte, and monocyte counts; the cut-off value for neutrophil count in predicting TT was reported as 5098/mm³ (AUC: 0.682, sensitivity: 70.1%, specificity: 64.7%) [24]. In an adult study that included a healthy control group alongside TT and EO cases, no significant differences were observed between the TT and EO groups in leukocyte, neutrophil, lymphocyte, monocyte, or platelet counts; for TT diagnosis, the neutrophil count cut-off was reported as >7437/mm³ (AUC: 0.686, sensitivity: 62.5%, specificity: 75.6%) [25]. Another pediatric study found that leukocyte and neutrophil levels were significantly higher in the TT group, but noted no significant differences in platelet and lymphocyte counts [13]. Conversely, a separate pediatric study detected no significant difference between the TT and EO groups in leukocyte and neutrophil counts [14]. In our study as well, among the hematological parameters, only neutrophil counts were significantly higher in the TT group, and we determined the cut-off value for neutrophil count in predicting TT as >8915/mm³.
The variations in the cut-off values reported across different studies, including ours, may stem from differences in patient age groups, inclusion criteria, and symptom duration.
Limitation
This study has several limitations. It was retrospective and single-center, so data gaps and the risk of selection bias may affect the results, and the generalizability of the findings is limited until confirmed by further studies. In addition, the physical examinations and ultrasonography required for diagnosis were performed by different physicians and radiologists. Finally, the studied parameters could not be compared with acute-phase reactants such as C-reactive protein, and the interval between symptom onset and hospital admission varied between patients, which may have affected the levels of the inflammatory markers.
Conclusion
The findings of our study suggest that NLR and neutrophil count may serve as useful biomarkers in differentiating TT from EO, particularly in cases where clinical distinction is challenging. These parameters, being both rapid and easily accessible, may provide supportive diagnostic value, especially in situations where physical examination and imaging findings are inconclusive.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content, including study design, data collection, analysis and interpretation, writing, preparation and scientific review of the contents, and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Chanchlani R, Acharya H. Acute scrotum in children: a retrospective study of cases with review of literature. Cureus. 2023;15(3):e36259.
2. Lee LK, Monuteaux MC, Hudgins JD, et al. Variation in the evaluation of testicular conditions across United States pediatric emergency departments. Am J Emerg. 2018;36(2):208-12.
3. Ogbetere F. Testicular torsion: Losses from missed diagnosis and delayed referral despite early presentation. Niger J Clin Pract. 2021;24(5):786-8.
4. Doğan M. Akut skrotal ağrı [Acute scrotal pain]. In: Taşar M, ed. Çocuk acil kliniğinde sık görülen başvuru semptomlarına yaklaşım [Approach to common presenting symptoms in pediatric emergency department]. Ankara: Türkiye Klinikleri; 2022:p.91-5.
5. Cokkinos DD, Partovi S, Rafailidis V, et al. Role and added value of contrast enhanced ultrasound of the painful scrotum in the emergency setting. J Ultrasound. 2023;26(2):563-75.
6. Islam MM, Satici MO, Eroglu SE. Unraveling the clinical significance and prognostic value of the neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio, systemic immune-inflammation index, systemic inflammation response index, and delta neutrophil index: An extensive literature review. Turk J Emerg Med. 2024;24(1):8-19.
7. Tuşat M, Memiş S. Evaluation of inflammatory markers and HALP score in childhood intussusceptions. Ann Clin Anal Med. 2025;16(7):530-4.
8. Cui P, Cheng T, Yan H. The value of nlr and plr in the diagnosis of rheumatoid arthritis combined with ınterstitial lung disease and assessment of treatment effect: a retrospective cohort study. Int J Gen Med. 2025;18:867-80.
9. Xu N, Zhang J-X, Zhang J-J, et al. The prognostic value of the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) in colorectal cancer and colorectal anastomotic leakage patients: a retrospective study. BMC Surg. 2025;25(1):57.
10. Pan L, Xiao J, Fan L. The value of NLR, PLR, PCT, and DD levels in assessing the severity of hyperlipidemic acute pancreatitis. Front Med (Lausanne). 2025;12:1561255.
11. Zahorec R. Neutrophil-to-lymphocyte ratio, past, present and future perspectives. Bratisl Lek Listy. 2021;122(7):474-88.
12. Serebryanaya N, Shanin S, Fomicheva E. Blood platelets as activators and regulators of inflammatory and immune reactions. Part 1. Basic characteristics of platelets as inflammatory cells. Med Immunol (Russ). 2018;20(6):785-96.
13. Sağır S. Using NLR, PLR, and ALT* WBC Index (AWI) to differentiate between epididymo-orchitis and testicular torsion in children: A comparative study. Med Pharm J. 2023;2(3):167-74.
14. Arslan S, Azizoglu M, Kamci TO, et al. The value of hematological inflammatory parameters in the differential diagnosis of testicular torsion and epididymo-orchitis in children. Exp Biomed Res. 2023;6(3):212.
15. Bayne CE, Villanueva J, Davis TD. Factors associated with delayed presentation and misdiagnosis of testicular torsion: a case-control study. J Pediatr. 2017;186:200-4.
16. Vasconcelos-Castro S, Soares-Oliveira M. Abdominal pain in teenagers: beware of testicular torsion. J Pediatr Surg. 2020;55(9):1933-5.
17. Dias Filho AC, Rocha RS. Is it time to also use hematologic parameters for the diagnosis and prognosis of testicular torsion? Transl Androl Urol. 2022;11(10):1368.
18. Keseroğlu BB, Güngörer B. Predictive role of large unstained cells (LUC) and hematological data in the differential diagnosis of orchitis and testicular torsion. Mid Blac Sea J Health. 2021;7(1):97-103.
19. Çoşkun N, Metin M. Can the pan-immune inflammation value be used as a biomarker for differentiating epididymo-orchitis and testicular torsion in children? Medical Records. 2025;7(2):500-6.
20. Yığman M, Ekenci BY, Durak HM. Predictive value of hematologic parameters in the differential diagnosis of testicular torsion and epididymo-orchitis. Androl Bul. 2024;26(3):167-72.
21. Ural DA, Karakaya AE, Güler AG. Comparative analysis of the acute appendicitis management in children before and during the coronavirus disease-19 pandemic. KSU Med J. 2023;18(1):120-5.
22. Krzywinska E, Stockmann C. Hypoxia, metabolism and immune cell function. Biomedicines. 2018;6(2):56.
23. Çağlar U, Yıldız O, Ayrancı A. Predictive role of hematological parameters in testicular torsion and epididymo-orchitis in adolescents. Androl Bul. 2025;27(1):72-7.
24. Lee HY, Lim DG, Chung HS, et al. Mean platelet volume is the most valuable hematologic parameter in differentiating testicular torsion from epididymitis within the golden time. Transl Androl Urol. 2022;11(9):1282.
25. Benlioğlu C, Ali Ç. Comparison of hematological markers between testicular torsion and epididymo-orchitis in acute scrotum cases. New J Urol. 2021;16(3):207-14.
Mustafa Tuşat. The Role of NLR, PLR, and hematologic parameters in the differential diagnosis of pediatric acute scrotum. Ann Clin Anal Med 2025;16(9):642-645
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc/4.0/
Expert perspectives on the use of artificial intelligence in surgical endoscopy: Ethical and legal considerations in Türkiye
Mehmet Alperen Avcı, Can Akgün
Department of General Surgery, Faculty of Medicine, Samsun University, Samsun, Türkiye
DOI: 10.4328/ACAM.22812 Received: 2025-07-16 Accepted: 2025-08-19 Published Online: 2025-08-29 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):646-652
Corresponding Author: Mehmet Alperen Avcı, Department of General Surgery, Faculty of Medicine, Samsun University, Samsun, Türkiye. E-mail: mehmet.avci@samsun.edu.tr P: +90 553 215 86 98 Corresponding Author ORCID ID: https://orcid.org/0000-0003-3911-2686
Other Authors ORCID ID: Can Akgün, https://orcid.org/0000-0002-8367-0768
This study was approved by the Ethics Committee of Samsun University (Date: 2025-04-30, No: GOKAEK 2025/9/14).
Aim: Numerous AI-assisted endoscopic technologies have been developed to overcome the major limitations encountered in endoscopic procedures. However, the introduction of these new technologies has raised various concerns regarding ethical, legal, and accountability-related issues. In this study, we aimed to evaluate the ethical concerns and general perspectives of general surgery specialists and subspecialists in Türkiye regarding the use of AI in endoscopic procedures.
Materials and Methods: This 31-item survey study, which assessed demographic characteristics, ethical concerns, and general perspectives, was conducted across Türkiye between May 1 and June 30, 2025.
Results: It was observed that the majority of participants had limited knowledge and a moderate level of concern regarding the use of artificial intelligence (AI) in endoscopic patient management, as well as the associated ethical and legal regulations. Participants emphasized that AI should be used as an assistive tool, provided that appropriate oversight and training are in place. Gastrointestinal surgeons expressed significantly greater concern about potential errors that AI systems might generate, whereas surgical oncologists were more supportive of using AI as an assistive tool. Meanwhile, general surgeons more prominently highlighted the necessity of formal education on AI.
Discussion: General surgery specialists and subspecialists in Türkiye have expressed their ethical and legal concerns regarding the use of artificial intelligence (AI) in endoscopy, along with the perceived need for regulation and education. It is of great importance to consider the perspectives of these specialists during the development and clinical integration of AI systems.
Keywords: endoscopy, ethics, artificial intelligence, liability
Introduction
Artificial intelligence (AI) refers to machine or computer systems that autonomously perform tasks by mimicking human cognitive functions such as learning, problem-solving, and decision-making [1]. Initially developed to assist in diagnostic processes in radiology, AI’s major advantages include its ability to recognize structures and objects with high sensitivity and specificity, provide rapid algorithm-based reports, and deliver highly consistent results [2]. With rapid technological advancements, AI has attracted substantial attention and has increasingly been integrated into various aspects of human life. Today, AI applications are widely utilized in many fields of medicine, including diagnosis, treatment, drug development, support for patient-physician interactions, and even medical education [3-4]. In recent years, AI has generated considerable interest in healthcare, leading to discussions about whether AI-powered “robot doctors” could eventually replace human physicians in the future [5].
The integration of AI into endoscopic procedures has emerged as one of the key focus areas in current research, and the advancement of AI applications in medicine continues to evolve through numerous clinical studies in this field [6]. The main challenges in endoscopic procedures are associated with the complexity of real-time image interpretation and the risk of human error. Some of the most significant limitations in contemporary endoscopic screening and surveillance include interobserver variability in lesion detection, time-consuming biopsy protocols, sampling errors, and the potential to miss subtle or early-stage premalignant lesions [7]. To address these challenges, AI-assisted endoscopy technologies have been developed, and numerous AI tools have been introduced to enhance lesion detection, characterization, and diagnostic decision-making in endoscopy [8-9].
However, as with all widely adopted technologies, the use of AI in healthcare has raised ethical concerns. Issues such as data privacy and security, lack of transparency, algorithmic bias, accountability, and the mechanization of healthcare services have been subjects of debate [10]. Although there is a growing consensus regarding the potential benefits of AI, ethical and legal concerns remain prominent. This survey-based study aimed to evaluate the ethical concerns and general perspectives regarding the use of AI in diagnosis, patient management, and treatment planning during endoscopic procedures, based on the opinions of actively practicing general surgeons and subspecialists performing endoscopy in Türkiye.
Materials and Methods
This survey-based study investigated the general and ethical perspectives of general surgeons on the use of artificial intelligence (AI) in diagnosis, patient management, and treatment planning during endoscopic procedures. The study was approved by the Non-Interventional Clinical Research Ethics Committee of Samsun University (Approval No: GOKAEK 2025/9/14, dated April 30, 2025) and was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. The survey was created using the Google Forms platform and was pre-tested by a group of 10 active general surgeons performing endoscopy who were not involved in the study. The final version of the questionnaire was distributed via email through the Turkish Surgical Society Secretariat and shared on social media platforms between May 1 and June 30, 2025, targeting active general surgeons, surgical oncologists, and gastrointestinal surgeons performing endoscopy across Türkiye.
Considering that approximately 3,500 active general surgeons and surgical subspecialists perform endoscopy in Türkiye, the sample size was calculated to achieve at least 157 participants for a reliable sample with an 80% confidence level and a 5% margin of error. To account for potential data loss, responses from at least 160 participants were targeted. The survey was originally developed and administered in Turkish. An English translation of the survey was provided as supplementary material (Table I). Informed consent was obtained from all participants, and participant confidentiality and anonymity were maintained throughout the study. Those who did not provide consent were not allowed to access the survey. The questionnaire consisted of 31 questions. The first six questions collected demographic information, including age, gender, years of professional experience (<5 years, 6–10 years, 11–15 years, 16–20 years, 21–25 years, and >25 years), academic title (Specialist, Assistant Professor, Associate Professor, and Professor), type of institution (State Hospital, University Hospital, Private Hospital, and Private Clinic), and surgical specialty or subspecialty (General Surgery, Surgical Oncology, and Gastrointestinal Surgery). Participants were then asked to respond to 25 questions related to three main topics: the use of AI in diagnosis, patient management, and treatment planning in endoscopy, as well as associated ethical concerns. The questions in these main sections included 17 items rated on a five-point Likert scale, while the supplementary section included additional questions using two- to four-point Likert scales (Table I).
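The sample-size calculation described above can be sketched as follows. The specific formula (a proportion estimate with finite-population correction, often attributed to Cochran) and the function name are our assumptions, since the paper does not state which formula was used; with N = 3,500, an 80% confidence level (z ≈ 1.2816), a 5% margin of error, and p = 0.5, it reproduces the reported minimum of 157 participants.

```python
import math

def finite_population_sample_size(N, z=1.2816, e=0.05, p=0.5):
    """Minimum sample size for estimating a proportion in a finite
    population of size N.

    z: standard-normal quantile for the confidence level
       (1.2816 corresponds to the 80% level used in the paper)
    e: margin of error; p: assumed proportion (0.5 is worst case)
    Sketch under stated assumptions, not the paper's actual method.
    """
    q = 1 - p
    num = N * z * z * p * q                  # N * z^2 * p * q
    den = e * e * (N - 1) + z * z * p * q    # finite-population correction
    return math.ceil(num / den)
```

With the defaults, `finite_population_sample_size(3500)` yields 157, matching the minimum target stated in the text before the buffer for data loss was added.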
Descriptive statistics were used to summarize the characteristics of the study groups. Quantitative variables were presented as mean ± standard deviation (SD), and categorical variables were expressed as counts (n) and percentages (%). Differences between groups were analyzed using chi-square tests. A p-value of less than 0.05 was considered statistically significant. All statistical analyses were performed using a commercial statistical software package (IBM SPSS Statistics for Windows, Version 22.0; IBM Corp., Armonk, NY, USA).
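The chi-square comparison of categorical responses can be illustrated for the simplest case, a 2×2 table. The sketch below is our own illustration (the survey data were analyzed in SPSS, and the actual tables were larger than 2×2); the shortcut formula shown is algebraically equivalent to the general Pearson statistic for a 2×2 table, without continuity correction.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], no continuity correction.

    Compare the result with the critical value 3.841
    (df = 1, alpha = 0.05) to judge significance.
    Illustrative sketch, not the software used in the study.
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

For example, a table with cell counts 10/20 in one group and 20/10 in the other gives a statistic of about 6.67, exceeding 3.841, so the two response distributions would differ at p < 0.05.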
Ethical Approval
This study was approved by the Ethics Committee of Samsun University (Date: 2025-04-30, No: GOKAEK 2025/9/14).
Results
A total of 167 participants completed the survey within the specified study period. The female-to-male ratio among the participants was 21/146. The mean age of the participants was 42.12 ± 10.11 years. The majority of participants were general surgeons (n = 134, 80.2%) and those working in state hospitals (n = 90, 53.9%). The descriptive characteristics of the participants are presented in Table 2. A total of 49.7% of participants reported having a moderate level of knowledge about artificial intelligence (AI) applications (Question A1). Additionally, 44.9% stated that they had limited knowledge regarding the use of AI in diagnosis, patient management, and treatment planning during endoscopic procedures (Question A2). Furthermore, 43.7% of the participants indicated that they had no knowledge of the ethical principles and legal regulations concerning AI use either in Türkiye or globally (Question A6).
The majority of participants expressed a moderate level of concern regarding the ethical, legal, and data security aspects of AI applications (Question B). Furthermore, 52.7% of participants agreed that AI applications could serve as a supportive tool in physicians’ decision-making processes (Question C1). Most participants strongly agreed that AI applications should be subject to oversight, that formal training should be provided regarding their use, and that detailed information should be communicated to patients (Questions C2–C6) (Figure I).
Regarding ethical concerns, 73.1% of participants agreed that AI applications could potentially raise ethical issues in certain situations (Question D2), and 77.8% supported the idea that AI should be used solely as an assistive tool (Question D3). Moreover, 47.9% agreed that shared responsibility should apply in cases where AI systems produce erroneous outcomes (Question D4), and 32.3% agreed that AI could serve as a bridge in the physician-patient relationship (Question D6).
Additionally, the majority of participants agreed that there is a lack of sufficient legal regulations regarding patient privacy and AI in Türkiye, and expressed the need for education on AI use and ethical principles (Questions D5, D7–D8).
When the survey responses were analyzed according to participants’ gender, years of professional experience, academic title, type of institution, and general surgery subspecialty, no statistically significant differences were observed in any question based on academic title or type of institution. However, statistically significant differences were identified in six questions based on gender, years of professional experience, and general surgery subspecialty (p < 0.05).
Concerns regarding algorithmic bias and erroneous responses of AI systems were significantly higher among male participants (p = 0.007) and gastrointestinal surgeons (p = 0.046), with these groups more frequently selecting the “quite concerned” option (Question B2). The rate of participants who had never used AI applications was significantly higher among male surgeons (p = 0.038) and those with more than 26 years of professional experience (p = 0.009) (Question D1). The belief that AI applications would inevitably cause ethical issues was significantly lower among participants with 6–10 years (2.6%) and 21–25 years (30.8%) of professional experience (p = 0.040) (Question D2). Agreement with the statement that AI applications could assist physicians in the decision-making process was significantly higher among surgical oncologists (p = 0.006) (Question C1). Notably, gastrointestinal surgeons more frequently selected “strongly disagree” regarding the necessity of comprehensive pre-implementation testing for AI applications (p = 0.011); however, across all groups, the “strongly agree” option had the highest proportional response rate (Question C2). Regarding the necessity of providing regular training on AI use, agreement rates were significantly higher among surgical oncologists, while strong agreement was significantly more frequent among general surgeons (p = 0.012) (Question C3) (Table 3).
Discussion
In our study, the majority of participants reported having little or no knowledge regarding the use of artificial intelligence (AI) in endoscopic diagnosis, patient management, treatment planning, and the related ethical and legal regulations. Most participants expressed moderate concerns regarding ethical issues, legal liability, and data security associated with AI applications. There was a high level of agreement among participants regarding the use of AI as an assistive tool in clinical decision-making, the need for regulatory oversight, the importance of education on AI use and ethics, and the necessity of informing patients in detail. Male participants demonstrated significantly lower rates of AI use but expressed significantly greater concerns about erroneous outputs of AI systems. Participants with 21–25 years of professional experience were significantly less likely to believe that AI would inevitably cause ethical issues. Concerns about erroneous responses from AI applications were significantly higher among gastrointestinal surgeons, while surgical oncologists strongly agreed on the necessity of AI as an assistive tool, and general surgeons emphasized the need for regular training on AI use.
With advances in medicine, there has been a growing need for innovative technologies and new methods to address diagnostic, treatment, and management gaps in gastrointestinal (GI) diseases. Owing to its strong data analysis capacity and increasing popularity, AI has shown high potential in addressing these challenges [11]. Deep learning-based imaging and detection algorithms have been integrated into wearable medical devices for early disease alerts and healthcare management [12]. Consequently, some AI algorithms have begun to enter routine clinical practice in upper and lower GI endoscopy, assisting with early diagnosis, differential diagnosis, and assessment of tumor invasion depth [13–15].
However, despite progress in AI applications in endoscopy and other medical fields, issues such as data standardization and sharing, algorithm interpretability and safety, and unresolved ethical and legal concerns remain [11]. Particularly in the diagnosis and treatment of GI cancers, the clinical utility of AI applications depends not only on high diagnostic accuracy but also on clinicians’ understanding of and confidence in these technologies [16]. The complexity of AI models, with millions of parameters, their reliance on specific datasets, limited applicability to real-world data, and the lack of continuous learning mechanisms, all hinder transparency and trust in decision-making processes. Moreover, since 2022, only approximately 43% of AI-based medical devices approved by the U.S. Food and Drug Administration (FDA) have published clinical validation data, and less than 5% have been validated through prospective randomized controlled trials [17]. This situation negatively affects clinicians’ trust in and acceptance of AI for diagnostic, therapeutic, and follow-up purposes [18–20]. In our study, most participants expressed moderate concerns about data security in AI applications. Male participants and gastrointestinal surgeons were significantly more concerned about erroneous responses from AI systems.
During GI endoscopy, the accuracy of optical diagnosis often falls below expectations and varies significantly among endoscopists. To address this, computer-aided diagnosis (CADx) using AI has been introduced. Recent prospective studies have shown that collaborative use of AI with human endoscopists does not significantly improve diagnostic accuracy compared to human judgment alone [21–22]. Furthermore, one randomized trial demonstrated that fully autonomous AI systems (without human assistance) outperformed human-AI collaboration in diagnostic accuracy [23]. Although autonomous AI tools are not yet widely adopted in endoscopy, many endoscopists perceive them as either too futuristic or legally problematic, emphasizing the need for strict oversight of every aspect of model development to ensure fairness, safety, efficacy, and ethical compliance. Regular audits, algorithm transparency, and clinician training are considered crucial for building trust and accountability [24–25]. In our study, the majority of participants agreed that AI should only serve as an assistive tool in clinical decision-making and that its applications should be subject to regulatory oversight.
Another critical aspect of using AI in clinical endoscopic practice involves ethical, legal, safety, and regulatory factors. In particular, the use of AI in healthcare raises concerns about patient privacy violations, algorithmic bias leading to unfair outcomes, and uncertainties regarding accountability for AI-driven decisions [11]. When an AI system provides a misdiagnosis, questions arise regarding whether responsibility lies with the clinician, the hospital, regulatory bodies, or the system developers. Moreover, trust in autonomous systems remains a major barrier, as delegating diagnostic authority to machines and excluding human involvement may cause significant discomfort among patients and healthcare professionals [24]. In our study, most participants reported moderate concerns regarding AI applications and indicated limited knowledge about the use of AI in endoscopic diagnosis, patient management, treatment planning, and relevant ethical and legal regulations. They also agreed on the need for education on AI use and ethics, as well as the necessity of detailed patient information. Participants with 21–25 years of professional experience were significantly less likely to believe that AI would inevitably cause ethical issues, and general surgeons strongly agreed on the need for regular training in AI applications.
Limitation
This study has several limitations. First, it was a content-focused survey conducted solely among general surgeons, gastrointestinal surgeons, and surgical oncologists, which may limit the generalizability of the findings to the broader healthcare system. Although the sample size was calculated appropriately, a larger national sample considering variables such as age, gender, years of professional experience, academic title, type of institution, and surgical specialty or subspecialty would have strengthened the study’s results. Furthermore, while the study assessed ethical concerns, larger-scale approaches such as the Delphi method could have been used for a more comprehensive evaluation; however, this would have made it more difficult to reach a sufficiently large sample.
Conclusion
In conclusion, this study revealed that general surgery specialists and subspecialists in Türkiye have limited knowledge regarding the use of AI applications in endoscopy and the related ethical and legal regulations. They also expressed concerns about trust, ethical, and legal issues. There was a consensus that AI should be used solely as an assistive tool in clinical decision-making, that its applications should be regularly audited, and that formal training should be provided. Moving forward, it is essential to incorporate the perspectives of general surgery specialists and subspecialists—who are potential end users—into the development of these systems to comprehensively address issues related to trust, legal and ethical considerations, oversight, and education.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content, including the study design, data collection, analysis and interpretation, writing, preparation, and scientific review of the contents, as well as approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Chahal D, Byrne MF. A primer on artificial intelligence and its application to endoscopy. Gastrointest Endosc. 2020;92(4):813-20.e4.
2. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69(Suppl):36-40.
3. Luo R, Zeng Q, Chen H. Artificial intelligence algorithm-based MRI for differentiation diagnosis of prostate cancer. Comput Math Methods Med. 2022;2022:8123643.
4. Cheng G, Zhang F, Xing Y, et al. Artificial intelligence-assisted score analysis for predicting the expression of the immunotherapy biomarker PD-L1 in lung cancer. Front Immunol. 2022;13:893198.
5. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(3):230-43.
6. Plana D, Shung DL, Grimshaw AA, Saraf A, Sung JJY, Kann BH. Randomized clinical trials of machine learning interventions in health care: a systematic review. JAMA Netw Open. 2022;5(12):2233946.
7. Weusten B, Bisschops R, Coron E, et al. Endoscopic management of Barrett’s esophagus: European Society of Gastrointestinal Endoscopy (ESGE) position statement. Endoscopy. 2017;49(2):191-8.
8. Ebigbo A, Mendel R, Probst A, et al. Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus. Gut. 2020;69(4):615-6.
9. Li L, Chen Y, Shen Z, et al. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer. 2020;23(1):126-32.
10. Koçer Tulgar Y, Tulgar S, Güven Köse S, et al. Anesthesiologists’ perspective on the use of artificial intelligence in ultrasound-guided regional anaesthesia in terms of medical ethics and medical education: a survey study. Eurasian J Med. 2023;55(2):146-51.
11. Gao Y, Wen P, Liu Y, et al. Application of artificial intelligence in the diagnosis of malignant digestive tract tumors: focusing on opportunities and challenges in endoscopy and pathology. J Transl Med. 2025;23(1):412.
12. Wang J, Zhou G, Dai J, Xie Y. Energy-efficient intelligent ECG monitoring for wearable devices. IEEE Trans Biomed Circuits Syst. 2019;13(5):1112-21.
13. Wallace MB, Sharma P, Bhandari P, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology. 2022;163(1):295-304.e5.
14. Tang D, Wang L, Jiang J, et al. A novel deep learning system for diagnosing early esophageal squamous cell carcinoma: a multicenter diagnostic study. Clin Transl Gastroenterol. 2021;12(8):00393.
15. Kudo SE, Misawa M, Mori Y, et al. Artificial intelligence-assisted system improves endoscopic identification of colorectal neoplasms. Clin Gastroenterol Hepatol. 2020;18(8):1874-81.
16. Hatherley J, Sparrow R, Howard M. The virtues of interpretable medical AI. Camb Q Healthc Ethics. 2024;33(3):323-32.
17. Chouffani El Fassi S, Abdullah A, Fang Y, et al. Author correction: not all AI health tools with regulatory authorization are clinically validated. Nat Med. 2024;30(11):3381.
18. Sidak D, Schwarzerová J, Weckwerth W. Interpretable machine learning methods for predictions in systems biology from omics data. Front Mol Biosci. 2022;9:926623.
19. Gimeno M, Sada Del Real K, Rubio A. Precision oncology: a review to assess interpretability in several explainable methods. Brief Bioinform. 2023;24(4):bbad200.
20. Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks. Proc Natl Acad Sci USA. 2017;114(13):3521-6.
21. Rondonotti E, Hassan C, Tamanini G, et al. Artificial intelligence-assisted optical diagnosis for the resect-and-discard strategy in clinical practice: the Artificial intelligence BLI Characterization (ABC) study. Endoscopy. 2023;55(1):14-22.
22. Rex DK, Bhavsar-Burke I, Buckles D, et al. Artificial intelligence for real-time prediction of the histology of colorectal polyps by general endoscopists. Ann Intern Med. 2024;177:911-8.
23. Djinbachian R, Haumesser C, Taghiakbari M, et al. Autonomous artificial intelligence vs artificial intelligence-assisted human optical diagnosis of colorectal polyps: a randomized controlled trial. Gastroenterology. 2024;167(2):392-9.e2.
24. Schulz PJ, Lwin MO, Kee KM. Modeling the influence of attitudes, trust, and beliefs on endoscopists’ acceptance of artificial intelligence applications in medical practice. Front Public Health. 2023;11:1301563.
25. Cross JL, Choma MA, Onofrey JA. Bias in medical AI: implications for clinical decision-making. PLOS Digit Health. 2024;3(11):0000651.
How to cite this article: Mehmet Alperen Avcı, Can Akgün. Expert perspectives on the use of artificial intelligence in surgical endoscopy: Ethical and legal considerations in Türkiye. Ann Clin Anal Med 2025;16(9):646-652
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc/4.0/
Medicolegal evaluation of hanging cases (Sanliurfa-Turkey)
Ugur Demir, Hüseyin Kafadar
Department of Forensic Medicine, Faculty of Medicine, Harran University, Sanliurfa, Turkey
DOI: 10.4328/ACAM.22817 Received: 2025-07-19 Accepted: 2025-08-19 Published Online: 2025-08-30 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):653-657
Corresponding Author: Ugur Demir, Department of Forensic Medicine, Faculty of Medicine, Harran University, Sanliurfa, Turkey. E-mail: ugurdmr81@gmail.com P: +90 414 344 44 44 Corresponding Author ORCID ID: https://orcid.org/0000-0003-3266-2861
Other Authors ORCID ID: Hüseyin Kafadar, https://orcid.org/0000-0002-6844-7517
This study was approved by the Ethics Committee of the Clinical Research Ethics Committee of Harran University Rectorate (Date: 2024-11-18, No: E-76244175-050.04-393108)
Aim: Hanging is a common and highly lethal method of suicide worldwide. This study aims to describe the demographic, clinical, and outcome characteristics of hanging cases admitted to Şanlıurfa Harran University Hospital over 10 years.
Materials and methods: A retrospective review was conducted on 34 patients admitted between 2015 and 2024 due to hanging incidents. Data on age, sex, psychiatric history, Glasgow Coma Scale (GCS) at admission, materials used, and outcomes were analyzed.
Results: The mean age was 26.7 years, with 64.7% male patients. Suicide attempts accounted for 82.4% of cases. Half of the patients died either at admission or during hospitalization. A low GCS score (3–8) was strongly associated with mortality. Notably, 38.2% of cases were under 18 years old. Rope was the most frequently used material. Psychiatric history was documented in a minority of cases, and all surviving patients received psychiatric evaluation before discharge.
Discussion: Hanging remains a major public health concern due to its high mortality rate, particularly affecting younger populations. Early clinical assessment using GCS is critical for prognosis. Enhanced mental health support and prevention efforts targeting vulnerable populations are urgently needed. Larger studies are needed to better understand this issue and to help improve patient care.
Keywords: hanging, asphyxia, forensic medicine, suicide attempt
Introduction
Suicide is a multifaceted global public health issue, with close to 800,000 deaths reported annually worldwide [1]. This type of behavior stems from a deeply complex and multifactorial process, shaped by the interplay of biological, psychological, social, cultural, historical, existential, religious, philosophical, and economic influences. This behavior is not limited to one specific group of people; it can affect a broad spectrum of individuals, ranging from those with serious mental health disorders to otherwise healthy people who face difficult life situations and stress [1,2]. These unexpected deaths, which most commonly occur among young and middle-aged adults, impose significant economic, social, and psychological burdens on individuals, families, and communities [1,3].
Hanging is among the most frequently employed methods of suicide in both developed and developing countries [1,3,4,5]. The preference for this method is influenced by factors such as easy access to ligatures, high lethality, availability of natural or artificial suspension points, and the common perception that hanging results in a painless, rapid, and bloodless death [6-8]. A study examining suicide trends in Türkiye over a 25-year period reported that hanging accounted for 47.5% of all suicides, making it the leading method across all age groups and both sexes [6].
Death due to hanging may result from venous and arterial obstruction, airway blockage, spinal cord injury, or reflex cardiac arrest caused by carotid artery compression due to the pressure exerted on the neck structures [9-11]. The hanging process is typically divided into two phases: an early phase characterized by dyspnea, convulsions, and apnea, and a late phase marked by cardiac arrest. The clinical outcome depends largely on the degree of hypoxic brain injury sustained. In mild cases, full recovery without neurological sequelae is possible, whereas in severe cases, patients may progress to a persistent vegetative state or death [9,10]. Hanging can be classified as either complete, where both feet are off the ground, or partial, in which one or both feet or other body parts maintain contact with a surface, such as kneeling, sitting, or lying positions [8,10,11].
Fractures of the neck structures primarily occur in blunt trauma cases such as hanging, with the hyoid bone and thyroid cartilage most frequently affected due to the ligature pressure, followed by the cricoid cartilage and posterior cervical vertebrae [7,8,10]. Hyoid bone fractures may involve any of its five anatomical segments or multiple segments simultaneously; however, in compression injuries caused by hanging, these fractures localize predominantly in the greater cornua [8,12].
This study presents a retrospective analysis of hanging cases admitted to Şanlıurfa Harran University Hospital over ten years. It aims to evaluate the demographic and clinical characteristics of the patients, including age and gender distribution, Glasgow Coma Scale (GCS) scores at admission, hospitalization duration, presence of psychiatric or substance use history, and mortality outcomes. Additionally, the study examines the materials used for hanging, anatomical findings such as hyoid bone fractures, and the locations where the incidents occurred.
Materials and Methods
Patient information such as age, gender, history of coexisting psychiatric conditions, the method employed in hanging, and length of hospital stay were retrieved from the Şanlıurfa Harran University Hospital’s Hospital Information Management System (FONET). A retrospective review of hospital records spanning from 2015 to 2024 was conducted, during which cases involving hanging were identified and incorporated into the study.
According to the admission history, cases presenting with hanging were divided into three groups: suicide attempts, accidents, and cases of unknown cause.
At the time of hospital admission, patients were divided into three groups according to Glasgow Coma Scale (GCS) score: GCS 3-8 (severe), GCS 9-12 (moderate), and GCS 13-15 (mild).
Statistical analysis
All statistical evaluations were performed using IBM SPSS Statistics for Windows, version 21.0 (IBM Corp., Armonk, NY, USA). Continuous variables were summarized as means with standard deviations (SD), while categorical variables were expressed as frequencies and percentages. Associations between categorical variables were assessed with the chi-square test when all expected cell counts were five or greater, and with Fisher’s exact test when any expected cell count was below five. A p-value less than 0.05 was considered statistically significant throughout the analyses.
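As an illustrative sketch only (not the authors’ SPSS analysis), the expected-count rule used to choose between the chi-square test and Fisher’s exact test for a 2x2 table can be expressed in plain Python; the counts in the final example are hypothetical:

```python
# Sketch of the test-selection rule from the methods: Pearson's chi-square
# when all expected cell counts are at least five, Fisher's exact test
# otherwise. Standard library only; illustrative, not the original analysis.

def expected_counts(table):
    """Expected cell counts for a 2x2 contingency table [[a, b], [c, d]]."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return [[rows[i] * cols[j] / n for j in range(2)] for i in range(2)]

def chi_square_stat(table):
    """Pearson chi-square statistic, without continuity correction."""
    exp = expected_counts(table)
    return sum((table[i][j] - exp[i][j]) ** 2 / exp[i][j]
               for i in range(2) for j in range(2))

def choose_test(table):
    """Apply the expected-count rule described in the methods section."""
    exp = expected_counts(table)
    return "chi-square" if min(min(r) for r in exp) >= 5 else "Fisher exact"

# Hypothetical counts: expected cells are 20 and 80, so chi-square applies.
print(choose_test([[30, 70], [10, 90]]))                # chi-square
print(round(chi_square_stat([[30, 70], [10, 90]]), 1))  # 12.5
```

With small, unbalanced tables such as those arising from a 34-patient cohort, the expected counts readily fall below five, which is why Fisher’s exact test is needed alongside the chi-square test.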
Ethical approval
This study was approved by the Ethics Committee of the Clinical Research Ethics Committee of Harran University Rectorate (Date: 2024-11-18, No: E-76244175-050.04-393108).
Results
Between 2015 and 2024, a total of 34 patients presented to our hospital due to hanging. Among these patients, 22 (64.7%) were male with a mean age of 26.7 ± 14.23 years (range: 3–68 years), and 12 (35.3%) were female with a mean age of 27.08 ± 22.01 years (range: 14–86 years).
When the cases presenting with a history of hanging were analyzed according to age groups, 38.2% (n=13) were under 18 years of age, 44.1% (n=15) were between 18 and 35 years, 8.8% (n=3) were between 36 and 50 years, and 8.8% (n=3) were over 50 years. The highest incidence of presentations was observed in the 18–35 age group, followed by the pediatric group under 18 years. Individuals in the young adult and pediatric age groups constituted 82.3% of all cases. Notably, the adolescent group (defined as ages 12–18) accounted for 38.2% (n=13) of cases, indicating a conspicuously high number within this age range. Additionally, one case (2.9%) involved a 3-year-old child in the early pediatric age group, which was considered a subgroup requiring special attention.
At the time of hospital admission, traumatic lesions in regions other than the neck were detected in four cases. Among the hanging cases analyzed, 82.4% (n=28) were classified as suicide attempts, 5.9% (n=2) as accidental, and 11.8% (n=4) as cases with undetermined cause.
The average length of stay in the intensive care unit was 17.97 ± 48.3 days, ranging from 1 to 234 days.
Psychiatric history was present in three cases, substance abuse history in three cases, and non-psychiatric (somatic) medical history in two cases. Psychiatric consultations were requested during the hospital stay for all 17 patients discharged in a healthy condition.
Upon initial hospital admission, patients were categorized according to the Glasgow Coma Scale (GCS) into three groups: severe (GCS 3–8), moderate (GCS 9–12), and mild (GCS 13–15). At admission, 26 patients were classified as severe, 3 as moderate, and 5 as mild. Overall, 50% of the patients (n=17) died either at the time of admission or during hospitalization. Among those who died, 16 had an initial GCS score between 3 and 8, while only one patient had a GCS score between 9 and 12.
Regarding the locations where the hanging incidents occurred, 9 cases (26.5%) took place in enclosed spaces (home, dormitory, prison), 9 cases (26.5%) in open spaces, and for 16 cases (47.0%), information on the location was unavailable.
In terms of hanging materials, 8 cases (23.5%) involved the use of rope, and 1 case (2.9%) involved the use of a sheet. Data on hanging materials were not available for the remaining 25 cases.
Statistical significance was observed between GCS values and mortality (p=0.035). However, the comparisons among the remaining groups did not yield statistically significant results.
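As a hedged, standard-library illustration (not the authors’ exact SPSS computation, which reported p=0.035 across the three GCS groups), a two-sided Fisher exact test on the Results data collapsed into severe (GCS 3-8) versus non-severe patients (16/26 vs 1/8 deaths) yields a similarly significant result:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one.
    """
    r1 = a + b                    # first row total
    n = a + b + c + d             # grand total
    k = a + c                     # first column total (here: deaths)
    denom = comb(n, r1)

    def pmf(x):                   # P(first cell == x) under fixed margins
        return comb(k, x) * comb(n - k, r1 - x) / denom

    lo, hi = max(0, r1 - (n - k)), min(r1, k)
    p_obs = pmf(a)
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs + 1e-12)

# Severe (GCS 3-8): 16 died, 10 survived; non-severe: 1 died, 7 survived.
p = fisher_exact_2x2(16, 10, 1, 7)
print(round(p, 3))  # ~0.039, below the 0.05 significance threshold
```

This recomputation on the collapsed table (p ≈ 0.039) is consistent with the reported association between low GCS and mortality.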
Discussion
Globally, for every suicide death among women, approximately two suicide deaths are reported among men. Although suicide attempts are 2 to 4 times more frequent in women compared to men, the higher lethality of methods preferred by men partially explains this reversed mortality ratio [1]. In Turkey, over the past decade, the rate of male suicides by hanging has been 2 to 4 times higher than that of females. According to data from the Turkish Statistical Institute, hanging has been the most common suicide method between 2000 and 2023. In 2023 alone, 79.4% of deaths by hanging were male. During the same period, the highest suicide rates were observed in the 25–29 age group [3]. In our study, the mean age of 34 patients was 26.82 ± 14.23 years, and 64.7% (n=22) were male. These figures are consistent with the study by Lapo-Talledo et al. [13] in Ecuador, where the mean age was 34.15 ± 18.42 years and 79.5% of cases were male. In a study conducted in the United States by Choi et al. [4], hanging was most common among males aged 25–44 and females aged 18–24, with 78.3% of the cases being male. Gökçek et al. [14] reported that 84.2% of the cases in their Turkish study were male, with a median age of 31. These findings indicate that the male predominance and age distribution observed in our study align with both national and international literature.
The association between suicide and psychiatric disorders—particularly depression and substance use disorders—is well-documented, especially in high-income countries [1,15]. Factors such as young age, unemployment and economic hardship, a family history of suicide, previous suicide attempts, psychiatric illnesses, substance abuse, hopelessness, social isolation, limited access to treatment, and availability of lethal means increase the risk of suicide [16]. In our study, 6 cases (17.7%) had a documented history of psychiatric disorder and/or substance use. In the study by Choi et al. [4], 61.7% of 26,879 hanging/strangulation cases had a psychiatric disorder or substance misuse. The lower rates in our study may be attributed to data limitations, mental health stigma, and lower rates of psychiatric consultation due to sociocultural factors.
Mortality rates in hanging cases vary between 69% and 84% in the literature [5,11,17]. In our study, 76.5% (n=26) of the 34 patients presented with a Glasgow Coma Scale (GCS) score <8, and 50% (n=17) died either at presentation or during hospitalization. In the study by Ribaute et al. [18], the rate of GCS <8 was 40.1%, and the mortality rate was 21%. In a study by Jawaid et al. [19] involving 101 patients, the GCS <8 rate was 55.4%, with a mortality rate of 5.9%. In Berke et al.’s [17] study, 19.4% of patients with a GCS <8 died, with most of these patients having a GCS of 3. In the study by Biswas et al. [20], the GCS <8 rate was 36%, and the mortality rate was 24%. These findings confirm the high mortality commonly observed in hanging cases, especially among individuals with a GCS <8, supporting the conclusion that GCS is a critical predictor of mortality.
In hanging suicide attempts, indoor environments are most frequently chosen, likely due to privacy and easy access to binding materials [21]. In our study, 26.5% (n=9) of incidents occurred indoors, another 26.5% occurred in open areas, and location data were unavailable for 47.0% of cases. In a study from Turkey by Özer et al. [7], 60.4% occurred at home and 18.7% in open areas. In the U.S.-based study by Choi et al. [4], 71.3% of cases occurred at home. The relatively low percentage of indoor cases in our study may be attributed to the high rate of missing location data (47.0%).
Ropes and clothing items are the most commonly used materials in hanging suicide attempts [22]. In our study, the use of rope was found in 23.5% (n=8) of cases, and sheet use in 2.9% (n=1). Akber et al. showed that common ligature materials in hanging suicides include fabrics (45.6%), jute ropes (33.7%), and nylon ropes (20.7%) [23]. In the study by Shabnam et al. [24], 41.9% of ligature materials were ropes and 5.6% were nylon ropes. Özer et al. [7] reported that 62.5% were nylon ropes and 12.5% were sheets, while Chacko et al. [8] found that 46.3% were nylon ropes. These findings are consistent with our study and support the notion that rope use is a universal preference in hanging attempts.
In hanging cases, fracture of the hyoid bone is a significant forensic finding [7,25]. In our study, a hyoid bone fracture was observed in only one case (2.94%), which falls within the range of 0–83.3% reported in the literature [7,25]. Our findings are consistent with those of Roy et al. [12].
Limitation
A primary constraint of this study was the small patient cohort available for analysis. The small number of cases limited the statistical power of the study, preventing several comparisons from reaching statistical significance. An additional shortcoming is that, in suicide cases involving hanging, most individuals are found deceased at the scene, which results in a low number of hospital admissions and limits the availability of detailed clinical information. Lastly, since the research was carried out at a single tertiary care center located in southeastern Turkey, approximately 18 kilometers from urban residential zones, broader multicenter studies are necessary to obtain more generalizable findings that accurately represent nationwide trends.
Conclusion
This study examined 34 hanging cases treated at Şanlıurfa Harran University Hospital over 10 years. Most patients were young adults, with a mean age of 26.7 years, and men made up nearly two-thirds of the cases. The majority (82.4%) were suicide attempts, and half of the patients died either at admission or during treatment. A low Glasgow Coma Scale (GCS) score (3–8) at admission was strongly linked to higher mortality. Notably, a significant number of cases (38.2%) were under 18 years old, highlighting the vulnerability of younger individuals.
Rope was the most commonly used material in hanging attempts, while fractures of neck structures were rare. Although few patients had recorded psychiatric or substance use histories, all survivors received psychiatric evaluation before discharge. These findings highlight hanging as a serious public health problem with high fatality rates, especially among young people. Improved mental health services and prevention strategies are urgently needed. Larger studies are recommended to better understand hanging cases and improve patient outcomes.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content, including the study design, data collection, analysis and interpretation, writing, preparation, and scientific review of the contents, as well as approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Bachmann S. Epidemiology of Suicide and the Psychiatric Perspective. Int J Environ Res Public Health. 2018;15(7):1425.
2. Gürü M, Özensoy HS. Patterns and outcomes of traumatic suicides: a retrospective study of 132 patients admitted to a turkish medical center. Med Sci Monit. 2024;30:e943505.
3. GBD 2021 Suicide Collaborators. Global, regional, and national burden of suicide, 1990-2021: A systematic analysis for the Global Burden of Disease Study 2021. Lancet Public Health. 2025;10(3):e189-e202.
4. Choi NG, Marti CN, Choi BY. Three leading suicide methods in the United States, 2017-2019: Associations with decedents’ demographic and clinical characteristics. Front Public Health. 2022;10:955008.
5. Cai Z, Junus A, Chang Q. The lethality of suicide methods: A systematic review and meta-analysis. J Affect Disord. 2022;300:121-9.
6. Kartal E, Demir U, Hekimoğlu Y. Suicides in Turkey: 25-year trend (1995–2019). J Forensic Sci. 2022;67(5):1858-66.
7. Ozer E, Yildirim A, Kırcı GS. Deaths as a result of hanging. Biomed Res. 2017;28(2):556-61.
8. Chacko A, Gupta C, Palimar V. Epidemiological, gross morphological, and histopathological analysis of postmortem cases of hanging-an observational study. F1000Res. 2025;13:1-14.
9. Dorfman JD. Near Hanging: Evaluation and management. Chest. 2023;163(4):855-60.
10. Kannamani B, Sahni N, Bandyopadhyay A. Insights into pathophysiology, management, and outcomes of near-hanging patients: A narrative review. J Anaesthesiol Clin Pharmacol. 2024;40(4):582-7.
11. Coombs AE, Ashton-Cleary D. Hanging and near-hanging. BJA Educ. 2023;23(9):358-63.
12. Roy K, Panja S, Debnath BJ. Profile of neck structure injury patterns in cases of hanging: an autopsy-based study. J Forensic Med Toxicol. 2024;41(1):64-7.
13. Lapo-Talledo GJ, Talledo-Delgado JA, Portalanza D. Suicide rates in Ecuador: A nationwide study from 2011 until 2020. J Affect Disord. 2023;320:638-46.
14. Gökçek MB, Aslaner H, Çetin A. Evaluation of suicide cases resulting in death in Kayseri in 2019. J Health Sci. 2023;32(1):29-33.
15. Knipe D, Williams AJ, Hannam-Swain S, et al. Psychiatric morbidity and suicidal behaviour in low- and middle-income countries: A systematic review and meta-analysis. PLoS Med. 2019;16(10):e1002905.
16. Yıldırım Öztürk EN, Öztürk M. The crude incidence rate of suicide and related factors in Turkey between 2009 and 2018. Deu Med J. 2021;35(1):23-32.
17. Berke DM, Helmer SD, Reyes J. Injury patterns in near-hanging patients: how much workup is really needed? Am Surg. 2019;85(5):549-55.
18. Ribaute C, Darcourt J, Patsoura S, et al. Should CT angiography of the supra-aortic arteries be performed systematically following attempted suicide by hanging? J Neuroradiol. 2021;48(4):271-6.
19. Jawaid MT, Amalnath SD, Subrahmanyam DKS. Neurological outcomes following suicidal hanging: A prospective study of 101 patients. Ann Indian Acad Neurol. 2017; 20(2):106-8.
20. Biswas S, Rhodes H, Petersen K. Hanging in there. Living beyond hanging: a retrospective review of the prognostic factors from a regional trauma center. Panam J Trauma Crit Care Emerg Surg. 2020;9(3):218-22.
21. Tulapunt N, Phanchan S, Peonim V. Hanging fatalities in Central Bangkok, Thailand: a 13-year retrospective study. Clin Med Insights Pathol. 2017;10:1179555717692545.
22. Bhosle SH, Zanjad NP, Dake MD. Deaths due to hanging among adolescents–a 10-year retrospective study. J Forensic Leg Med. 2015;29:30-3.
23. Akber EB, Sultana S, Jahan I. Hanging: commonly encountered by forensic experts at autopsy. Mymensingh Med J. 2023;32(4):1058-63.
24. Shabnam S, Naiem J, Islam MS. Forensic analysis of suicidal hanging cases: study in a district hospital. Saudi J Med. 2022;7(7):363-6.
25. Khokhlov VD. Trauma to the hyoid bone and laryngeal cartilages in hanging: review of forensic research series since 1856. Leg Med. 2015;17(1):17-23.
How to cite this article: Ugur Demir, Hüseyin Kafadar. Medicolegal evaluation of hanging cases (Sanliurfa-Turkey). Ann Clin Anal Med 2025;16(9):653-657
An evaluation of the prevalence of keratoconus and corneal topographic alterations following the 2023 Turkish earthquake
Mübeccel Bulut 1, Ali Hakim Reyhan 2
1 Department of Ophthalmology, Necip Fazıl City Hospital, Kahramanmaraş, 2 Department of Ophthalmology, Faculty of Medicine, Harran University, Şanlıurfa, Turkey
DOI: 10.4328/ACAM.22822 Received: 2025-07-21 Accepted: 2025-08-25 Published Online: 2025-08-31 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):658-662
Corresponding Author: Mübeccel Bulut, Department of Ophthalmology, Necip Fazıl City Hospital, Kahramanmaraş, Turkey. E-mail: mubeccelbagdas@gmail.com P: +90 552 405 88 30 Corresponding Author ORCID ID: https://orcid.org/0000-0003-1311-2282
Other Authors ORCID ID: Ali Hakim Reyhan, https://orcid.org/0000-0001-8402-0954
This study was approved by the Ethics Committee of Harran University Clinical Research Ethics Committee (Date: 2024-12-30, No:24.21.30).
Aim: This study evaluated the effects of the devastating Turkish earthquake in 2023 on the prevalence of keratoconus and corneal topographic parameters, with a focus on identifying potential associations between earthquake-induced environmental and psychological stressors and changes in keratoconus presentations.
Materials and Methods: This retrospective analysis was conducted at a single center by comparing clinical data and corneal topography measured before and after the earthquake. Parameters evaluated included K1, K2, Kmax, corneal thickness, and asymmetry indices (the keratoconus index (KI), index of height of decentration (IHD), central keratoconus index (CKI), and Kmax-K2). Statistical analyses were performed to assess alterations in the prevalence of keratoconus and topographic indices.
Results: Although the overall increase in the prevalence of keratoconus from 14.0% (n=28) pre-earthquake to 21.0% (n=42) post-earthquake was not statistically significant (p > 0.05), significant changes were observed in corneal asymmetry indices: the KI (p=0.03), IHD (p=0.02), and CKI (p=0.01) all increased significantly. The difference between Kmax and K2 was also significant (p=0.04).
Discussion: The findings indicate a trend toward a higher prevalence of keratoconus coupled with significant alterations in corneal asymmetry measurements following the Turkish earthquake of 2023. These results support the hypothesis that earthquake-related environmental and psychological factors may affect the progression of keratoconus, underscoring the need for further multicenter, long-term studies to clarify these associations and underlying mechanisms.
Keywords: keratoconus, corneal topography, environmental factors, 2023 Turkish earthquake
Introduction
Keratoconus is an ophthalmic disease characterized by progressive thinning, increased curvature, and conical deformation of the cornea, affecting visual acuity. Numerous mechanisms are involved in the pathogenesis, including genetic factors, biomechanical disorders, and oxidative stress. While genetic predisposition occupies an important place in the emergence of the disease, decreased microstructural stability and the biomechanical properties of the cornea also contribute to its progression [1,2].Studies have revealed that oxidative stress plays a determinant role in the pathogenesis of keratoconus. Higher than normal free radical production and an insufficient antioxidant defense system in corneal tissue cause the tissue to be exposed to oxidative damage. This can contribute to the progression of the disease by leading to microstructural deterioration in corneal cells and stroma [2,3]. Additionally, oxidative stress triggering the release of proinflammatory cytokines can lead to increased local inflammation and further deterioration of the corneal structure [4].Acute and chronic stress conditions caused by major natural disasters such as earthquakes lead to increased systemic oxidative stress and proinflammatory responses. Researchers have suggested that, under such conditions, increased stress hormones (e.g., cortisol) and inflammatory mediators may accelerate the progression of chronic diseases such as keratoconus by weakening the microstructural integrity of corneal tissue [5]. Additionally, the indirect effects of psychological stress include eye rubbing or traumatic behaviors resulting from increased discomfort. These in turn can have additional destructive effects on the corneal structure due to mechanical trauma [2,5].In the wake of major natural disasters such as earthquakes, routine eye examinations and corneal topography imaging that monitor the progression of keratoconus may be delayed due to intense pressure on patients and healthcare infrastructure. 
Failure to diagnose early-stage keratoconus, or disruption of regular monitoring programs, may mean that the disease is only detected at more advanced stages, limiting treatment options.
In the early stages of keratoconus, standard keratometry measurements, and especially corneal topography data, objectively reflect the micro-level asymmetry and irregularity of the disease. Corneal topography parameters are particularly important tools for detecting corneal conical deformation and thinning and for monitoring disease progression [6]. Changes in topography observed during examinations in periods of trauma or high stress, such as after an earthquake, can therefore be critically important in objectively evaluating whether the disease is progressing.
This study compared the corneal topography parameters of patients with suspected keratoconus before and after the February 6, 2023, Turkish earthquake. It also set out to reveal the role of psychological stress factors in the clinical course of the disease by evaluating the potential effects of the earthquake on keratoconus pathogenesis and progression.
Materials and Methods
This retrospective, comparative cohort study was conducted in Kahramanmaraş, one of the regions of Türkiye most severely affected by the 2023 earthquake. Patients who presented for ophthalmological examination in the pre-earthquake (June-August 2022) and post-earthquake (June-August 2023) periods were evaluated. One hundred patients (200 eyes) in the pre-earthquake group and 100 patients (200 eyes) in the post-earthquake group, all with uncorrectable changes in visual acuity, corneal surface irregularities, and increased astigmatism detected during clinical examination, were included in the study. Individuals in whom keratoconus was suspected because of abnormalities in standard keratometric measurements (K1 and K2 values), and who therefore underwent advanced corneal topographic examination, constituted the study population.
All topographic measurements in the pre- and post-earthquake periods were performed by the same technician using a Pentacam HR (Oculus, Wetzlar, Germany) device, and all patients were evaluated by the same ophthalmology specialist (M.B.). Patients with a history of corneal surgery, corneal trauma, corneal infection, contact lens use, or systemic diseases such as connective tissue disorders were excluded from the study. Demographic data (age and gender) and clinical data (visual acuity, refraction values, intraocular pressure, biomicroscopic examination findings, and fundus examination findings) were recorded. Best-corrected visual acuity (BCVA) was quantified as a decimal value based on the ratio of the test distance to the standard distance for the letters read on the Snellen chart.
Patients who underwent topography because of suspected keratoconus were diagnosed according to the Amsler-Krumeich diagnostic classification.
Cases were divided into four stages using eccentric corneal steepening, degree of myopia/astigmatism, mean central K value, and corneal thickness.
Various corneal topography parameters were measured. Anterior surface parameters included Q front (anterior surface asphericity), K1 front (anterior surface flat keratometry), K2 front (anterior surface steep keratometry), and Kmax front (anterior surface maximum keratometry). The difference between the maximum and steep keratometry is expressed as Kmax-K2, while Astig front denotes the anterior surface corneal astigmatism and Km front the mean keratometry of the anterior surface. Posterior surface parameters included Q back (posterior surface asphericity), K1 back and K2 back (posterior surface flat and steep keratometry, respectively), Kmax back (posterior surface maximum keratometry), Astig back (posterior surface corneal astigmatism), and Km back (mean posterior surface keratometry). Corneal thickness was evaluated through three parameters: Thinnest L (the thinnest corneal point), Pachy apex (corneal apex thickness), and T-A (the difference in thickness between the thinnest point and the apex). Topographic indices included the index of surface variance (ISV), index of vertical asymmetry (IVA), keratoconus index (KI), index of height asymmetry (IHA), index of height decentration (IHD), minimum corneal curvature radius (R min), and central keratoconus index (CKI).
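Two of the quantitative definitions above can be sketched in code. Both functions are our illustrative reconstructions, not the authors' software: the decimal BCVA follows the Snellen-ratio definition given in the text, while the Amsler-Krumeich cut-offs below are simplified, commonly cited placeholder values that should be verified against the original classification before any clinical use.

```python
def snellen_to_decimal(test_distance_m: float, standard_distance_m: float) -> float:
    """Decimal BCVA = test distance / standard distance for the smallest
    line read on the Snellen chart (e.g. 6/12 Snellen -> 0.5)."""
    if standard_distance_m <= 0:
        raise ValueError("standard distance must be positive")
    return test_distance_m / standard_distance_m


def amsler_krumeich_stage(mean_k_d: float, refractive_error_d: float,
                          min_thickness_um: float) -> int:
    """Assign a stage (1-4) from mean central K (diopters), magnitude of
    myopia/astigmatism (diopters), and minimum corneal thickness (um).
    Thresholds are simplified, illustrative values only."""
    if mean_k_d > 55 or min_thickness_um <= 200:
        return 4
    if mean_k_d > 53 or min_thickness_um < 400:
        return 3
    if refractive_error_d > 5:
        return 2
    return 1
```

For example, a patient reading the 12 m line at 6 m would have a decimal BCVA of 0.5, and a cornea with mean K of 44 D, 3 D of myopia/astigmatism, and a 520 µm thinnest point would fall in stage 1 under these illustrative cut-offs.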
Statistical Analysis
Descriptive statistics were expressed as mean, standard deviation, median, minimum, maximum, frequency, and ratio values. The distribution of variables was assessed using the Kolmogorov-Smirnov and Shapiro-Wilk tests. The Mann-Whitney U test was applied for the analysis of non-normally distributed quantitative independent data. The chi-square test was used for the analysis of qualitative independent data. SPSS version 28.0 was used for all analyses.
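As a hedged illustration of the pipeline above (normality check, then the Mann-Whitney U test for the non-normal case), here is a SciPy sketch in place of SPSS; the arrays are simulated placeholders, not study values.

```python
# Simulated example of the two-group comparison described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(1.03, 0.09, 200)   # e.g. KI, 200 pre-earthquake eyes (dummy)
post = rng.normal(1.05, 0.07, 200)  # e.g. KI, 200 post-earthquake eyes (dummy)

# Normality of each group via Shapiro-Wilk (Kolmogorov-Smirnov is analogous)
shapiro_pre = stats.shapiro(pre).pvalue
shapiro_post = stats.shapiro(post).pvalue

# Two independent groups, non-normal case -> Mann-Whitney U
u_stat, p_val = stats.mannwhitneyu(pre, post, alternative="two-sided")
print(f"Shapiro p: {shapiro_pre:.3f}/{shapiro_post:.3f}; Mann-Whitney p={p_val:.4f}")
```

The same pattern applies to any of the topographic parameters: test the distribution first, then choose the parametric or nonparametric comparison accordingly.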
Ethical Approval
This study was approved by the Harran University Clinical Research Ethics Committee (Date: 2024-12-30, No: 24.21.30).
Results
Four hundred eyes of 200 patients (pre- and post-earthquake) were evaluated in this study. The patients' mean age was 25.6 ± 11.5 years, BCVA was 0.87 ± 0.12, K1 front was 43.2 ± 2.4 D, K2 front was 45.4 ± 2.8 D, and Kmax front was 46.8 ± 3.7 D. In addition, 82.5% of the eyes were classified as subclinical keratoconus, 11.8% as stage I, 3.3% as stage II, and 2.5% as stage III; no stage IV cases were seen. The patients' demographic data and all evaluated corneal topographic parameters are presented in Table 1.
Table 2 provides an overview of demographic characteristics, topographic parameter values, and the corresponding statistical comparisons before and after the earthquake. Demographic and corneal topographic parameters were compared between the pre-earthquake (100 patients, 200 eyes) and post-earthquake (100 patients, 200 eyes) groups. Mean ages were similar between the two groups (25.5 ± 11.9 vs. 25.7 ± 11.1 years, p=0.582). Gender distribution exhibited a non-significant trend towards more females in the post-earthquake group (57.0% vs. 65.0%, p=0.101). BCVA did not differ significantly between the groups (0.90 ± 0.14 vs. 0.84 ± 0.10, p=0.240). In terms of keratoconus staging, the distributions of subclinical (86.0% vs. 79.0%), stage I (10.0% vs. 13.5%), stage II (3.0% vs. 3.5%), and stage III (1.0% vs. 4.0%) cases did not differ significantly between the groups (p=0.150). The pre-earthquake mean spherical autorefraction was -3.11 ± 1.32 D, compared with -3.25 ± 1.30 D post-earthquake; the difference was not significant (p=0.450). Mean cylindrical values were -2.29 ± 1.09 D pre-earthquake and -2.35 ± 1.09 D post-earthquake; this difference was also not statistically significant (p=0.740).
In terms of the corneal topographic parameters, several indices exhibited statistically significant differences between the pre- and post-earthquake groups.
KI was significantly higher in the post-earthquake group (1.05 ± 0.07 vs. 1.03 ± 0.09, p=0.002). Similarly, a significant IHD elevation was observed in the post-earthquake group (0.02 ± 0.03 vs. 0.03 ± 0.05, p=0.001), and CKI also increased significantly post-earthquake (1.01 ± 0.02 vs. 1.02 ± 0.02, p=0.001). Other parameters, including K1 front (42.9 ± 2.1 vs. 43.4 ± 2.7, p=0.098), K2 front (45.3 ± 2.4 vs. 45.6 ± 3.1, p=0.526), Kmax front (46.6 ± 3.4 vs. 46.9 ± 4.1, p=0.426), and corneal thickness at the thinnest location (526.7 ± 47.4 vs. 518.9 ± 60.3, p=0.292), exhibited no statistically significant differences between the two groups. The difference between maximum and steep keratometry (Kmax-K2) exhibited a borderline significant increase in the post-earthquake group (1.26 ± 1.66 vs. 1.38 ± 1.54, p=0.050), as did IVA (0.21 ± 0.22 vs. 0.26 ± 0.30, p=0.050).
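As a hedged sketch of how a staging-distribution comparison like the one above could be run, the counts below are reconstructed from the reported percentages (200 eyes per group); the code is our illustration, not the authors' analysis, and the authors' exact test may differ.

```python
# Chi-square test of independence on the keratoconus stage distribution.
# Counts derived from the text's percentages of 200 eyes per group:
# pre: 86%, 10%, 3%, 1%; post: 79%, 13.5%, 3.5%, 4%.
from scipy import stats

table = [
    [172, 20, 6, 2],   # pre-earthquake: subclinical, stage I, II, III
    [158, 27, 7, 8],   # post-earthquake
]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```

With a 2×4 table the test has 3 degrees of freedom; note that some expected cell counts here are small, so an exact test could also be considered.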
Discussion
This study evaluated the prevalence of keratoconus and corneal topographic changes following the 2023 Turkish earthquake. It also highlights the lack of research directly investigating the effects of natural disasters such as earthquakes on the cornea. From that perspective, the findings provide significant clues regarding the potential impact of the earthquake on the development and progression of keratoconus. The overall clinical prevalence of keratoconus was 14.0% (n = 28) before the earthquake and rose to 21.0% (n = 42) afterwards. Although this difference in frequency was not statistically significant, the partial increase indicates a tendency for the prevalence to rise in the post-earthquake period. Increases were observed at every stage of keratoconus, with a significant increase in the advanced stage. Moreover, in the post-earthquake period, parameters such as KI, IHD, and CKI increased significantly. This suggests that stress and corneal trauma mechanisms emerging after the earthquake may play a role in the pathogenesis and progression of keratoconus.
No significant changes were observed in the majority of topographic parameters, such as K1, K2, Kmax, and corneal thickness. However, increases were found in indices reflecting central and surface asymmetry (KI, IHD, CKI, and Kmax-K2). This suggests that disease progression in the post-earthquake period manifests more as microstructural and asymmetric changes, highlighting the importance of early diagnosis and intervention strategies.
Natural disasters, and especially earthquakes, can cause long-term psychophysiological effects in addition to acute physical trauma. Such events can increase levels of systemic inflammatory mediators and oxidative stress markers, producing changes in corneal tissue, which is known to be sensitive to redox homeostasis and inflammatory processes [7].
Dysregulation of inflammatory cascades may play an important role in the progression of corneal ectasias such as keratoconus and in the initiation of corneal degenerative processes. Keratoconus is reported to be associated not only with genetic factors but also with environmental (air pollution, fine particles, and dusty environments) and mechanical (such as eye rubbing) factors.
Environmental factors emerging after earthquakes can also have adverse effects on eye health. Hsu et al. reported significant increases in SO₂ and NO concentrations compared to background levels following earthquakes in Taiwan [8]. This increase was attributed to underground gas emissions caused by the earthquake, along with traditional sources of air pollution, indicating a sudden, potentially hazardous deterioration in air quality. Dust and particles from collapsed buildings also cause significant decreases in air quality. Zanoletti et al. investigated environmental pollution resulting from collapsed buildings after the 2023 earthquake in Türkiye and emphasized that building collapses and demolition caused significant increases in PM2.5 levels in particular. Those authors also concluded that hazardous materials such as asbestos, lead, and silica may enter the air, potentially causing severe health problems including respiratory issues, neurotoxicity, immunotoxicity, skin and eye irritation, and liver and kidney damage [9]. Lu et al. noted that particles, especially PM2.5, reduced the viability and proliferation of conjunctival epithelial cells while increasing apoptosis and IL-6 expression, significantly contributing to the development and exacerbation of allergic conjunctivitis [10]. This can in turn trigger increased eye rubbing, leading to heightened mechanical trauma to the cornea and laying the foundation for the progression of corneal disorders such as keratoconus.
Previous studies have concluded that eye rubbing is an important risk factor in the development and progression of keratoconus [11]. Jurkiewicz et al. demonstrated a significant positive correlation between the prevalence of keratoconus and levels of fine particulate matter (PM2.5) and nitrogen dioxide (NO₂) pollution, and suggested that fine particles may have a direct effect on corneal structures, potentially increasing apoptosis and driving the progression of keratoconus. Significant positive correlations between keratoconus prevalence and air pollution levels have also been observed in different countries. Keratoconus may therefore be associated not only with genetic factors but also with environmental factors as part of a multifactorial etiology [12,13]. The prevalence of keratoconus has been reported to be directly proportional to the amount of particulate matter, which, in addition to causing atopy and eye itching, may also directly affect the corneal epithelium and stroma [14,15].
Limitations
There are a number of limitations to this study. In particular, its retrospective, single-center design may restrict the generalizability of the findings. Additionally, psychological stress and other environmental factors that may have emerged in the post-earthquake period could not be assessed objectively, so the results require cautious interpretation from that perspective. Further multicenter, long-term follow-up studies, along with analyses of psychological and environmental factors, will contribute to a more comprehensive understanding of the effects of natural disasters such as earthquakes on the progression of keratoconus. Because the study was retrospective, we could not quantify the psychological stress experienced by individuals or the extent to which they were affected physiologically. However, earthquakes have been shown to be a significant source of stress [7]. Demonstrating increased inflammatory cytokines and stress hormones in the affected population might have allowed us to establish a more meaningful relationship. Our most significant limitation, however, was the inability to conduct the study with the same patient groups, because our clinic is not a tertiary center that treats keratoconus: keratoconus is diagnosed by specialist ophthalmologists at our clinic, and patients are referred to a tertiary center for treatment.
Conclusion
This study evaluated the effects of the 2023 Turkish earthquake on the prevalence of keratoconus and corneal topographic parameters. Although not statistically significant, the findings demonstrated an increase in keratoconus rates in the post-earthquake period, together with significant changes in indices reflecting central and surface asymmetry. These findings support the potential role of earthquake-related environmental and psychological factors in the progression of keratoconus and will serve as a useful basis for future large-scale studies.
Scientific Responsibility Statement
The authors declare that they are responsible for the article's scientific content, including study design, data collection, analysis and interpretation, writing, preparation and scientific review of the contents, and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Davidson AE, Hayes S, Hardcastle AJ. The pathogenesis of keratoconus. Eye (Lond). 2014;28(2):189-95.
2. Bykhovskaya Y, Rabinowitz YS. Update on the genetics of keratoconus. Exp Eye Res. 2021;202(1):108398.
3. Wojcik KA, Kaminska A, Blasiak J. Oxidative stress in the pathogenesis of keratoconus and Fuchs endothelial corneal dystrophy. Int J Mol Sci. 2013;14(9):19294-308.
4. de Azevedo Magalhães O, Gonçalves MC, Gatinel D. The role of environment in the pathogenesis of keratoconus. Curr Opin Ophthalmol. 2021;32(4):379-84.
5. Roberts CJ, Knoll KM, Mahmoud AM. Corneal stress distribution evolves from thickness-driven in normal corneas to curvature-driven with progression in keratoconus. Ophthalmol Sci. 2024;4(2):100373.
6. Padmanabhan P, Lopes BT, Eliasy A. In vivo biomechanical changes associated with keratoconus progression. Curr Eye Res. 2022;47(7):982-6.
7. Navel V, Malecaze J, Pereira B, et al. Oxidative and antioxidative stress markers in keratoconus: A systematic review and meta-analysis. Acta Ophthalmol. 2021;99(6):e777-94.
8. Hsu SC, Huang YT, Huang JC, et al. Evaluating real-time air-quality data as earthquake indicator. Sci Total Environ. 2010;408(11):2299-304.
9. Zanoletti A, Bontempi E. The impacts of earthquakes on air pollution and strategies for mitigation: a case study of Turkey. Environ Sci Pollut Res. 2024;31(16):24662-72.
10. Lu CW, Fu J, Liu XF, et al. Air pollution and meteorological conditions significantly contribute to the worsening of allergic conjunctivitis: a regional 20-city, 5-year study in Northeast China. Light Sci Appl. 2021;10(1):190.
11. Sahebjada S, Al-Mahrouqi HH, Moshegov S, et al. Eye rubbing in the aetiology of keratoconus: a systematic review and meta-analysis. Graefes Arch Clin Exp Ophthalmol. 2021;259(8):2057-67.
12. Crawford AZ, Zhang J, Gokul A. The enigma of environmental factors in keratoconus. Asia Pac J Ophthalmol (Phila). 2020;9(6):549-56.
13. Jurkiewicz T, Marty AS. Air pollution and the prevalence of keratoconus: is there a connection? Ophthalmic Epidemiol. 2025;32(4):394-402.
14. Jurkiewicz T, Marty AS. Correlation between keratoconus and pollution. Ophthalmic Epidemiol. 2021;28(6):495-501.
15. Das AV, Basu S. Environmental and air pollution factors affecting allergic eye disease in children and adolescents in India. Int J Environ Res Public Health. 2021;18(11):5611.
Mübeccel Bulut, Ali Hakim Reyhan. An evaluation of the prevalence of keratoconus and corneal topographic alterations following the 2023 Turkish earthquake. Ann Clin Anal Med 2025;16(9):658-662
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc/4.0/
Assessment of pediatric patients with negative RT-PCR and CT-based COVID-19 diagnosis referred to ICU
Bensu Bulut 1, Murat Genç 2, Medine Akkan Öz 1, Ayşenur Gür 3, Ramazan Kocaaslan 4, Dilek Atik 5, Ramiz Yazıcı 6, Hüseyin Mutlu 7
1 Department of Emergency Medicine, Health Sciences University, Gulhane Training and Research Hospital, Ankara, 2 Department of Emergency Medicine, Ankara Training and Research Hospital, Ankara, 3 Department of Emergency Medicine, Etimesgut Şehit Sait Ertürk State Hospital, Ankara, 4 Department of Urology, Faculty of Medicine, Kafkas University, Kars, 5 Department of Emergency Medicine, Faculty of Medicine, Karamanoğlu Mehmetbey University, Karaman, 6 Department of Emergency Medicine, Kanuni Sultan Suleyman Training and Research Hospital, İstanbul, 7 Department of Emergency Medicine, Faculty of Medicine, Aksaray University, Aksaray, Turkiye
DOI: 10.4328/ACAM.22826 Received: 2025-07-26 Accepted: 2025-08-25 Published Online: 2025-08-30 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):663-667
Corresponding Author: Ayşenur Gür, Department of Emergency, Etimesgut Şehit Sait Ertürk State Hospital, Ankara, Turkiye. E-mail: draysenurcakici@gmail.com P: +90 553 775 95 21 Corresponding Author ORCID ID: https://orcid.org/0000-0002-9521-1120
Other Authors ORCID ID: Bensu Bulut, https://orcid.org/0000-0002-5629-3143 . Murat Genç, https://orcid.org/0000-0003-3407-1942 . Medine Akkan Öz, https://orcid.org/0000-0002-6320-9667 . Ramazan Koca, https://orcid.org/0000-0003-1944-7059 . Dilek Atik, https://orcid.org/0000-0002-3270-8711 . Ramiz Yazıcı, https://orcid.org/0000-0001-9210-914X . Hüseyin Mutlu, https://orcid.org/0000-0002-1930-3293
This study was approved by the Ethics Committee of Dr. Abdurrahman Yurtaslan Education and Research Hospital (Date: 2021-01-13, No: 2020/07.727).
Aim: Despite negative RT-PCR results, some pediatric patients show COVID-19 findings on thoracic CT imaging. This study aimed to evaluate laboratory parameters and clinical characteristics of RT-PCR-negative pediatric patients diagnosed with COVID-19 based on thoracic CT findings and referred to the intensive care unit.
Materials and Methods: This retrospective study included 214 patients under 18 years presenting to the pediatric emergency department between March 15 and June 15, 2020. Patients with two negative RT-PCR tests but COVID-19 suspicion on thoracic CT requiring intensive care were enrolled. Patients were categorized into four age groups (0-2, 3-6, 7-11, 12-18 years) and classified as mild and severe hypoxemic patients. Laboratory parameters, including complete blood count, C-reactive protein, and D-dimer, were analyzed.
Results: Mean ages were 1.3±0.5, 4.3±1, 9.4±1.4, and 14.6±1.6 years for respective age groups, with 57.9% males. Over half of patients in younger age groups were asymptomatic (57.1% in 0-2 years, 58.1% in 3-6 years). Significant differences were found between age groups in white blood cell count, lymphocyte, platelet, and red cell distribution width levels (p<0.05). The 0-2 age group showed higher lymphocyte (5.3±3.7) and platelet (387±202) counts compared to older groups. D-dimer and mean platelet volume levels differed significantly between symptomatic and asymptomatic patients (p<0.05). A moderate negative correlation was found between clinical presentation and D-dimer levels (rs: -0.342, p<0.05).
Discussion: Age-related variations in laboratory parameters suggest different immune responses across pediatric age groups in COVID-19. D-dimer and MPV may serve as potential biomarkers for disease severity assessment in RT-PCR-negative pediatric COVID-19 patients.
Keywords: COVID-19, pediatric, RT-PCR-negative, laboratory parameters, intensive care
Introduction
Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) and was first described in December 2019 in Wuhan, China [1]. The disease, declared a pandemic by the World Health Organization (WHO) on 11 March 2020, has become a serious public health issue affecting millions of people worldwide [2]. Although most people, including children, are susceptible to SARS-CoV-2, the disease usually progresses more mildly in the pediatric population than in adults. In an epidemiological study conducted in China, more than 90% of COVID-19 cases in children were mild or moderate, hypoxemia was less frequent than in adults, and the rate of critical illness was only 0.6% [3].
Definitive diagnosis of COVID-19 is made by identifying SARS-CoV-2 viral RNA using real-time reverse transcription polymerase chain reaction (RT-PCR) in nasopharyngeal and oropharyngeal swab samples [4]. However, RT-PCR has limitations. Its sensitivity can be between 60% and 70%, and it can yield false-negative results, particularly in the early stages of the disease or when the viral load is low [5]. The rate of positive RT-PCR tests has been reported to change after hospitalization, and the false-negative rate can be high, particularly in the early stages [6]. In the presence of clinical doubt, radiologic imaging therefore plays a supportive role in diagnosis. Because of these diagnostic challenges, alternative approaches that can assist in the early diagnosis of COVID-19 are needed. Routine laboratory parameters stand out as an easily accessible, rapid, and cost-effective option. In adult patients, lymphopenia, thrombocytopenia, elevated C-reactive protein (CRP), elevated lactate dehydrogenase (LDH), and elevated D-dimer levels have been associated with disease severity and prognosis [7,8].
The role laboratory parameters play in diagnosing COVID-19 in the pediatric population is still not fully understood. In children, laboratory findings are difficult to interpret because of the developmental characteristics of the immune system, age-related physiological changes, and differences in disease response compared with adults [9]. Lymphopenia and elevated CRP levels have been reported to be less common in pediatric COVID-19 patients than in adults [10], highlighting the need for diagnostic algorithms specific to the pediatric population. The risk factors for developing critical illness in children are also not yet fully understood. However, among the 345 confirmed pediatric COVID-19 cases with no missing data on underlying conditions, the most frequently reported underlying conditions were chronic pulmonary disease (11.6%), cardiovascular disease (7.2%), and immunosuppression (2.9%) [11]. Additionally, fever, cough, shortness of breath, and dyspnoea have been the most frequently reported clinical findings in the United States and China [11,12]. The challenges of diagnosing COVID-19 in children are even more pronounced in patients with a negative RT-PCR test but clinical and radiological findings suggestive of the disease. In their meta-analysis, Mantovani et al. noted that COVID-19 presents a heterogeneous clinical spectrum in children and adolescents and that diagnostic approaches should take this heterogeneity into account [13]. In this context, the role of laboratory parameters in assessing disease severity, and whether these parameters differ by age group, is of critical importance.
The aim of this study was to systematically evaluate the demographic characteristics, clinical findings, laboratory parameters, and presenting complaints of pediatric patients referred to the intensive care unit (ICU) from the emergency department (ED) with a preliminary COVID-19 diagnosis based on thoracic computerised tomography (CT) findings despite a negative RT-PCR test. By analyzing differences in laboratory parameters between age groups and changes in biochemical markers between mildly and severely hypoxemic patients in this special patient group, we aimed to obtain data that will guide clinicians in the management of pediatric COVID-19.
Materials and Methods
Study Design and Participants
This study was performed in the COVID-19 outpatient clinic of the Pediatric Emergency Department (PED) of the Ankara Yenimahalle Training and Research Hospital between 15 March 2020 and 15 June 2020. Patients under the age of 18 who presented to this clinic, had two negative reverse transcription polymerase chain reaction (RT-PCR) tests, had COVID-19 suspicion based on thorax CT findings, and were referred due to the need for intensive care were included in the study. Demographic data, chronic illnesses, clinical findings, laboratory findings upon presentation to the PED, and patient outcomes were reviewed individually and obtained retrospectively. Clinical findings of patients presenting to the PED were defined as fever, dyspnea, sore throat, headache, cough, chest pain, abdominal pain, diarrhea, joint pain, and loss of taste or smell. All patients presenting to the COVID-19 PED of our hospital underwent a detailed physical examination, had their vital signs checked, and routinely had RT-PCR, complete blood count (CBC), biochemical parameters, and posteroanterior (PA) chest x-ray requested. Routine laboratory tests, including CBC parameters, were studied from the first blood sample obtained after presentation to the PED. In routine tests, arterial oxygen pressure (PaO2) lower than 80 mmHg or arterial oxygen saturation (SaO2) lower than 94% was evaluated as hypoxemia. Patients presenting to the PED were divided into two groups: mild hypoxemia if SaO2 was between 90% and 93%, and severe hypoxemia if it was below 90% [14].
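The SaO2 grouping just described can be expressed as a small helper. The function below is our sketch, not the authors' code; only the thresholds come from the text.

```python
# Illustrative helper for the SaO2 grouping (hypoxemia if SaO2 < 94%;
# mild 90-93%, severe < 90%, per the study's definitions).
def classify_hypoxemia(sao2_percent: float) -> str:
    if sao2_percent < 90.0:
        return "severe"
    if sao2_percent < 94.0:   # the text's "90-93%" band, read as [90, 94)
        return "mild"
    return "none"
```

For example, a presenting SaO2 of 88% would be grouped as severe hypoxemia and 92% as mild hypoxemia.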
Patients under the age of 18 presenting to the COVID-19 PED were assessed as requiring intensive care and referred to the ICU if they had severe illness (fever, dyspnea, and/or chest imaging congruent with SARS-CoV-2 pneumonia, or a new or increased need for oxygen and/or ventilation support) or critical illness (respiratory failure requiring mechanical ventilation, acute respiratory distress syndrome, shock, systemic inflammatory response syndrome, and/or multiple organ failure). Patients were separated into four age groups: 0-2 years (infancy), 3-6 years (early childhood), 7-11 years (middle childhood), and 12-18 years (adolescence). Patients under the age of 18 who had a negative RT-PCR test, a COVID-19 diagnosis based on thoracic computerised tomography (TCT), a referral due to the need for intensive care, and complete data were included in the study. Patients older than 18 years, with a positive RT-PCR test, without TCT, diagnosed with COVID-19 without the need for intensive care, or with missing data for any of the researched parameters were excluded. No patient consent was required due to the retrospective design.
Statistical Analysis
All statistical analyses were performed using SPSS version 20.0 for Windows. Descriptive statistics were used for patient demographics. Chi-square and Fisher's exact tests were used to compare rates of categorical variables. Numerical values were expressed as mean ± standard deviation and minimum-maximum values. The study data were nonparametric in distribution. Depending on whether variables were categorical (nominal or ordinal) or numerical, the Kruskal-Wallis H test and the Mann-Whitney U test were used for comparisons between independent groups. Parameters found to be significant were evaluated using Spearman's correlation test. Results were evaluated at a significance level of p<0.05.
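The nonparametric tests named above can be sketched with SciPy in place of SPSS. All arrays below are simulated placeholders, not study data; only the group sizes echo the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Lymphocyte counts for the four age groups (illustrative sizes from the
# study: n = 49, 31, 38, 96; values are dummies)
groups = [rng.lognormal(1.6, 0.5, 49), rng.lognormal(1.2, 0.5, 31),
          rng.lognormal(1.0, 0.5, 38), rng.lognormal(0.9, 0.5, 96)]

# Kruskal-Wallis H test: comparison across more than two independent groups
h_stat, p_kw = stats.kruskal(*groups)

# Mann-Whitney U test: two independent groups (e.g. mild vs. severe hypoxemia)
u_stat, p_mw = stats.mannwhitneyu(groups[0], groups[3], alternative="two-sided")

# Spearman correlation for parameters found to be significant
severity = rng.integers(0, 2, size=96)      # 0 = mild, 1 = severe (dummy)
d_dimer = rng.lognormal(0.0, 1.0, size=96)  # dummy D-dimer values
rs, p_sp = stats.spearmanr(severity, d_dimer)

print(f"Kruskal-Wallis p={p_kw:.3f}, Mann-Whitney p={p_mw:.3f}, rs={rs:.3f}")
```

A significant Kruskal-Wallis result would typically be followed by pairwise Mann-Whitney comparisons, as the methods describe.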
Ethical Approval
This study was approved by the Ethics Committee of Dr. Abdurrahman Yurtaslan Education and Research Hospital (Date: 2021-01-13, No: 2020/07.727).
Results
A total of 214 of the 3268 patients under the age of 18 who presented to the COVID-19 outpatient clinic of our emergency department between the study dates were included in the study. The workflow of this study is shown in Figure 1. The mean age of Group 1 was 1.3±0.5 years (n=49), Group 2 was 4.3±1 years (n=31), Group 3 was 9.4±1.4 years (n=38), and Group 4 was 14.6±1.6 years (n=96); 57.9% (n=124) of patients were male.
When patients were reviewed by age group, there were more patients in Group 4, and males were numerically dominant in all groups. There was no statistically significant difference in gender distribution between the age groups (p>0.05). Review of the clinical findings revealed that, although there was a high number of asymptomatic patients, there was no significant difference between age groups (p>0.05). Demographic data and clinical findings are shown in Table 1. There were significant differences in laboratory parameters between age groups, including white blood cell (WBC), lymphocyte, platelet, and red cell distribution width (RDW) values (p<0.05). Although CRP, an acute phase reactant, was higher in the 0-2 age group than in the other groups, the difference was not significant (p>0.05). Patients' laboratory findings by age group are shown in Table 2. When patients were grouped according to mild and severe hypoxemia, D-dimer and MPV values were significant at this distinction point, while the other laboratory findings were not statistically significant (p>0.05) (Table 3). Evaluation of the correlation between clinical presentation and laboratory parameters in pediatric COVID-19 patients revealed a moderately significant negative relationship between clinical presentation and D-dimer levels and a weakly significant negative relationship between clinical presentation and MPV levels (rs = -0.342, p<0.05; rs = -0.175, p<0.05).
Discussion
During the COVID-19 pandemic, multiple studies investigating the diagnostic value of laboratory parameters and clinical findings for early diagnosis and prognosis were published [15,16]. In our study, we evaluated the laboratory findings and clinical characteristics of 214 pediatric patients who received a COVID-19 diagnosis based on thoracic CT findings despite negative RT-PCR tests and were referred because of the need for intensive care. The most important finding of our study was the significant difference in WBC, lymphocyte, platelet, and RDW levels between age groups. We also found that more than half of the patients included in the study (57.1% in the 0-2 age group, 58.1% in the 3-6 age group) presented with mild hypoxemia, and that the most common symptoms were fever and cough. In their study evaluating 171 pediatric COVID-19 patients, Lu et al. reported that 15.7% of patients had mild disease and no hypoxemia [17], while Dong et al. reported this rate as 12.9% in their large series of 731 cases [18]. In a systematic review of 1124 pediatric COVID-19 cases from 38 studies, de Souza et al. reported that 14.2% of patients were asymptomatic, 36.3% had mild disease, 46% moderate disease, 2.1% severe disease, and 1.2% critical disease [3]. The higher rate of mildly hypoxic patients in our study (between 47.4% and 57.1%) may be explained by the fact that our hospital is a regional pandemic center and that screening tests are widely used. The most common symptoms reported by de Souza et al. were fever (47.5%), cough (41.5%), and nasal symptoms (11.2%); Sedighi et al. reported them as fever (73%), cough (54%), and shortness of breath (36%) [12]. Similarly, fever, cough, and dyspnea were the most common symptoms of children hospitalized in the USA and China [19,20]. Likewise, our study found that fever and cough were the main symptoms.
Age-related changes in laboratory parameters reflect the heterogeneous nature of the pediatric population. While Guan et al. reported a lymphopenia rate of over 80% in adult COVID-19 patients [21], this rate is much lower in children: in a study of 31 pediatric patients by Wang et al., the lymphopenia rate was only 12.9% [22]. The significantly higher mean lymphocyte count (5.3±3.7) in the 0-2 age group compared with the other age groups in our study suggests that the immune response differs in young children. Although CRP levels in the 0-2 age group (214±59.8) were higher than in the other groups without reaching statistical significance, this finding is clinically noteworthy. Chen et al. demonstrated that elevated CRP is related to disease severity in pediatric COVID-19 patients [21]. In their meta-analysis, Lippi et al. also highlighted that procalcitonin and CRP levels increased significantly in the presence of bacterial coinfection [22]. The elevated CRP levels seen in the younger age group in our study suggest that the risk of bacterial coinfection might be higher in this age group.
The variation in platelet counts across age groups in pediatric COVID-19 patients is a remarkable finding. In their study conducted in China, Wang et al. reported a thrombocytopenia rate of 36% in children with COVID-19 infection and noted that this condition was even more pronounced in severe cases [23]. Similarly, the meta-analysis by Lippi et al. demonstrated that thrombocytopenia was observed in 57.7% of patients with severe COVID-19 infection, decreasing to 31.6% in patients with mild symptoms [22]. Our study found higher platelet counts in the 0-2 age group than in the other age groups, suggesting that the inflammatory response might differ in young children. The study conducted in Wuhan by Huang et al. similarly revealed a significant correlation between age and platelet counts [7].
The significant differences in RDW between age groups indicate that this parameter may have prognostic value in pediatric COVID-19 patients. Chen et al. reported that RDW levels were associated with disease severity in COVID-19 patients and that high RDW levels might predict poor prognosis [21]. In their large series of 1099 patients, Guan et al. demonstrated that hematologic parameters of COVID-19 patients change with age and suggested that this may be associated with age-related changes in the immune system [19]. The negative correlation between D-dimer and clinical presentation detected in our study was consistent with the findings reported by Bhuiyan et al. in their systematic review of children under the age of 5 [24], which showed that coagulation parameters were more stable in mildly hypoxic children and that D-dimer elevation was associated with severe hypoxemia. In our study, the significant difference in MPV values between mildly and severely hypoxemic patients suggests that this parameter may be a potential marker for assessing disease severity. The study performed by Qin et al. on COVID-19 patients in Wuhan suggested that immune response dysregulation might affect thrombocyte function and thereby alter MPV levels [25]. In their meta-analysis, Mantovani et al. reported that COVID-19 infection is usually mild in children and adolescents, but that changes in laboratory parameters might be important in predicting disease severity [13]. Our findings are consistent with these data from the literature, and the high lymphocyte and platelet counts observed in the 0-2 age group in particular might reflect the hyper-reactivity of the immune system seen in this age group.
Strengths
One of the strengths of our study is that it is one of the rare studies researching RT-PCR-negative pediatric patients diagnosed by thoracic CT.
Limitation
Our study has several limitations. The first is its single-center, retrospective design. Due to the retrospective design, data such as the interval between symptom onset and hospital admission, viral load levels, and follow-up results are missing. Another limitation is that the healthy children in the control group were not serologically evaluated for asymptomatic COVID-19 infection.
Conclusion
In conclusion, our study demonstrated that there are significant differences in laboratory parameters between age groups of patients who tested negative on RT-PCR, received a COVID-19 diagnosis based on thoracic CT findings, and were referred to the intensive care unit. In particular, the age-related changes in WBC, lymphocyte, platelet, and RDW counts suggest that the immune response changes with age in pediatric COVID-19 cases. The significant differences in D-dimer and MPV levels between mildly and severely hypoxemic patients indicate that these parameters may be potential biomarkers for evaluating disease severity. These findings emphasize that age-group-specific assessment protocols should be developed and that laboratory parameters should be better utilized in clinical decision-making in pediatric COVID-19 patients. Future multicenter prospective studies are required to confirm these findings and better understand prognostic factors in pediatric COVID-19.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content including study design, data collection, analysis and interpretation, writing, some of the main line, or all of the preparation and scientific review of the contents and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Zhou M, Zhang X, Qu J. Coronavirus disease 2019 (COVID-19): a clinical update. Front Med. 2020;14(2):126-35.
2. Xu J, Chen Y, Chen H. 2019 novel coronavirus outbreak: a quiz or final exam? Front Med. 2020;14(2):225-8.
3. Dong Y, Mo X, Hu Y, et al. Epidemiology of COVID-19 among children in China. Pediatrics. 2020;145(6):e20200702.
4. Kucirka LM, Lauer SA, Laeyendecker O. Variation in false-negative rate of reverse transcriptase polymerase chain reaction-based SARS-CoV-2 tests by time since exposure. Ann Intern Med. 2020;173(4):262-7.
5. Vandenberg O, Martiny D, Rochas O. Considerations for diagnostic COVID-19 tests. Nat Rev Microbiol. 2021;19(3):171-83.
6. Liu R, Han H, Liu F, et al. Positive rate of RT-PCR detection of SARS-CoV-2 infection in 4880 cases from one hospital in Wuhan, China, from Jan to Feb 2020. Clin Chim Acta. 2020;505:172-5.
7. Huang C, Wang Y, Li X, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497-506.
8. Zhou F, Yu T, Du R, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet. 2020;395(10229):1054-62.
9. Simon AK, Hollander GA, McMichael A. Evolution of the immune system in humans from infancy to old age. Proc Biol Sci. 2015;282(1821):20143085.
10. Henry BM, Benoit SW, de Oliveira MHS, et al. Laboratory abnormalities in children with mild and severe coronavirus disease 2019 (COVID-19): a pooled analysis and review. Clin Biochem. 2020;81:1-8.
11. Sedighi I, Fahimzad A, Pak N, et al. A multicenter retrospective study of clinical features, laboratory characteristics, and outcomes of 166 hospitalized children with coronavirus disease 2019 (COVID-19): a preliminary report from Iranian Network for Research in Viral Diseases (INRVD). Pediatr Pulmonol. 2022;57(2):498-507.
12. CDC Covid-19 Response Team. Coronavirus disease 2019 in children-United States, February 12–April 2, 2020. MMWR Morb Mortal Wkly Rep. 2020;69(14):422-6.
13. Mantovani A, Rinaldi E, Zusi C. Coronavirus disease 2019 (COVID-19) in children and/or adolescents: a meta-analysis. Pediatr Res. 2021;89(4):733-7.
14. Mushumbamwiza H, Webster HH, Kayitesi C. Hypoxemia detection and oxygen therapy practices in neonatal and pediatric wards across seven district and referral hospitals in Rwanda. Front Pediatr. 2025;13:1526779.
15. Henry BM, Lippi G, Plebani M. Laboratory abnormalities in children with novel coronavirus disease 2019. Clin Chem Lab Med. 2020;58(7):1135-8.
16. Demir A, Gümüş H, Kazanasmaz H. An examination of the laboratory data of pediatric COVID-19 cases. IJCMBS. 2021;1(2):33-7.
17. Lu X, Zhang L, Du H, et al. SARS-CoV-2 infection in children. N Engl J Med. 2020;382(17):1663-5.
18. de Souza TH, Nadal JA, Nogueira RJN. Clinical manifestations of children with COVID-19: a systematic review. Pediatr Pulmonol. 2020;55(8):1892-9.
19. Guan WJ, Ni ZY, Hu Y, et al. Clinical characteristics of coronavirus disease 2019 in China. N Engl J Med. 2020;382(18):1708-20.
20. Wang Y, Zhu F, Wang C, et al. Children hospitalized with severe COVID-19 in Wuhan. Pediatr Infect Dis J. 2020;39(7):e91-e94.
21. Chen N, Zhou M, Dong X, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395(10223):507-13.
22. Lippi G, Plebani M. Procalcitonin in patients with severe coronavirus disease 2019 (COVID-19): a meta-analysis. Clin Chim Acta. 2020;505:190-1.
23. Wang D, Ju XL, Xie F, et al. Clinical analysis of 31 cases of 2019 novel coronavirus infection in children from six provinces (autonomous region) of northern China. Zhonghua Er Ke Za Zhi. 2020;58(4):269-74.
24. Bhuiyan MU, Stiboy E, Hassan MZ, et al. Epidemiology of COVID-19 infection in young children under five years: a systematic review and meta-analysis. Vaccine. 2021;39(4):667-77.
25. Qin C, Zhou L, Hu Z, et al. Dysregulation of immune response in patients with Coronavirus 2019 (COVID-19) in Wuhan, China. Clin Infect Dis. 2020;71(15):762-8.
Bensu Bulut, Murat Genç, Medine Akkan Öz, Ayşenur Gür, Ramazan Kocaaslan, Dilek Atik, Ramiz Yazıcı, Hüseyin Mutlu. Assessment of pediatric patients with negative RT-PCR and CT-based COVID-19 diagnosis referred to ICU. Ann Clin Anal Med 2025;16(9):663-667
Low-concentration liquid phenol demonstrates similar therapeutic success to high-concentration and crystallized phenol in pilonidal sinus management
Melih Can Gül 1, Sinan Şener 2
1 Department of Gastroenterology Surgery, Afyonkarahisar State Hospital, Afyonkarahisar, 2 Department of General Surgery, Faculty of Medicine, Selçuk University, Konya, Türkiye
DOI: 10.4328/ACAM.22827 Received: 2025-07-28 Accepted: 2025-08-29 Published Online: 2025-08-31 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):668-672
Corresponding Author: Melih Can Gül, Department of Gastroenterology Surgery, Afyonkarahisar State Hospital, Afyonkarahisar, Türkiye. E-mail: opdrmelihcangul@gmail.com P: +90 535 651 24 02 Corresponding Author ORCID ID: https://orcid.org/0000-0002-6165-1144
Other Authors ORCID ID: Sinan Şener, https://orcid.org/0000-0003-0800-1166
This study was approved by the Ethics Committee of Afyonkarahisar Health Sciences University (Date: 2022-06-03, No: 2022/7)
Aim: Pilonidal sinus disease (PSD) is a common inflammatory condition in young adults. Crystallized phenol and 80% liquid phenol are widely used conservative options but may cause adverse effects such as skin and fat necrosis. Recently, low-concentration liquid phenol has been suggested as a safer alternative. This study aimed to compare the efficacy and safety of three phenol-based treatments.
Materials and Methods: This retrospective study included 126 patients with primary, uncomplicated PSD treated between October 2020 and May 2022. Patients were divided into three groups: crystallized phenol (Group A, n=42), 80% liquid phenol (Group B, n=43), and 40% diluted phenol (Group C, n=41). Groups were stratified by sinus count (<2 or ≥2). Baseline features (age, BMI, comorbidities, smoking, alcohol use) were similar. All procedures were performed under local anesthesia in an outpatient setting. Postoperative complications (infection, bleeding, fat and skin necrosis) were assessed on days 3, 7, and 21. Recurrence was monitored for 24 months. ANOVA and Chi-square tests were applied.
Results: Mean age was 28.7 years; 88.9% were male. Recurrence rates were comparable (A: 11.9%, B: 9.7%, C: 10.4%; p>0.05). Skin necrosis occurred in four patients in Group A, three in Group B, and none in Group C. Abscesses developed in three patients in Groups A and B, and one in Group C. No major bleeding was observed.
Discussion: Low-concentration phenol offered similar efficacy with fewer complications and may be a safe, effective option for PSD treatment.
Keywords: pilonidal sinus, phenol therapy, minimally invasive, recurrence, skin necrosis
Introduction
Pilonidal sinus disease (PSD) is a chronic inflammatory condition affecting the sacrococcygeal region, primarily in adolescents and young adults, with a marked male predominance and an estimated incidence of 26 per 100,000 population annually [1,2]. The etiology is multifactorial, involving local mechanical forces, hair insertion, and repeated trauma leading to sinus formation and secondary infection [3].
While wide excision has historically been the mainstay of treatment, it is associated with prolonged healing and high morbidity [4]. Conservative outpatient treatments using phenol have gained popularity due to shorter recovery times and reduced complication rates [5]. Crystallized phenol and 80% liquid phenol are the most widely used forms, but both can cause significant local tissue toxicity, including skin and fat necrosis [6].
Low-concentration liquid phenol has recently gained attention as a potentially safer alternative, though comprehensive comparative evidence remains limited [7]. Furthermore, studies directly comparing the efficacy and complication profiles of all three phenol formulations (crystallized, high-concentration, and low-concentration) are scarce.
The present study aimed to compare the clinical outcomes and complication rates of these three phenol-based treatment modalities in patients with primary, non-complicated PSD, using standardized follow-up over a 24-month period.
Materials and Methods
Patient Selection and Group Allocation
This retrospective study included 126 adult patients with primary, non-complicated pilonidal sinus disease treated at a tertiary-level general surgery clinic between October 2020 and May 2022. Inclusion criteria were: age ≥18 years, absence of prior pilonidal surgery, and availability of 24-month clinical follow-up. Patients were excluded if they had recurrent disease, acute pilonidal abscess, anorectal comorbidities, or incomplete clinical data.
Patients were stratified based on the number of sinus openings into:
• Single-pit disease
• Multiple-pit disease
Following stratification, patients were allocated into three treatment groups:
• Group A (n=42): Crystallized phenol
• Group B (n=43): 80% liquid phenol
• Group C (n=41): 40% diluted phenol (prepared by mixing equal parts of 80% phenol and 0.9% saline)
All procedures were performed under local anesthesia in an outpatient setting.
Ethical approval was granted by the Clinical Research Ethics Committee of Afyonkarahisar Health Sciences University (Meeting No: 2022/7, Approval Date: 03/06/2022), and written informed consent was obtained from all participants.
Data Collection
Demographic and clinical data were recorded, including age, sex, body mass index (BMI), weight, smoking and alcohol use, comorbidities, and the number of sinus pits (classified as single-pit or multiple-pit disease). All data were prospectively collected using standardized outpatient clinic forms.
Follow-up assessments were conducted on postoperative days 3, 7, and 21, and at regular intervals for 24 months. Complications such as wound infection, abscess, fat necrosis, skin necrosis, and bleeding were documented. Recurrence was defined as the reappearance of symptoms or sinus openings after initial healing.
Phenol Application Procedure
Before the procedure, all patients were instructed to completely shave the sacrococcygeal region. Local anesthesia was administered with up to 5 cc of lidocaine. After mechanical debridement of hair and granulation tissue using mosquito forceps, a protective layer of nitrofurazone cream was applied around the sinus openings to prevent skin contact with phenol.
The application techniques for each phenol type were standardized according to the recommendations of Kayaalp and Tolan [8]:
• In Group A, small fragments of crystallized phenol were inserted directly into the sinus tract and retained for approximately five minutes, allowing slow liquefaction and diffusion while minimizing leakage into surrounding tissues.
• In Group B, 1 mL of 80% liquid phenol was instilled via a plastic IV cannula and retained in the cavity for five minutes before being drained.
• In Group C, 0.5 mL of 80% phenol was diluted with 0.5 mL of 0.9% saline to prepare a 1 mL 40% phenol solution, which was applied in the same manner and for the same duration as in Group B.
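The dilution used for Group C follows the standard C1·V1 = C2·(V1+V2) relationship. As a minimal sketch (the function name `diluted_concentration` is illustrative, not part of any protocol document):

```python
# Sketch of the dilution arithmetic behind Group C's solution:
# mixing a stock solution with a diluent halves (or otherwise scales)
# the concentration in proportion to the volumes used.
def diluted_concentration(c_stock: float, v_stock: float, v_diluent: float) -> float:
    """Final concentration (%) after mixing stock solution with diluent."""
    return c_stock * v_stock / (v_stock + v_diluent)

# 0.5 mL of 80% phenol + 0.5 mL of 0.9% saline (saline phenol content ~0%)
final = diluted_concentration(c_stock=80.0, v_stock=0.5, v_diluent=0.5)
print(f"{final:.0f}% phenol")  # -> 40% phenol
```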
Following the application period, excess phenol was gently evacuated by external pressure, and the wound was left open without dressing. All procedures were performed in an outpatient setting.
Follow-up Process
Routine follow-up evaluations were conducted on days 3, 7, and 21 post-treatment, and then at monthly intervals for 24 months. Infections were treated with oral second-generation cephalosporins when necessary. Analgesics were prescribed only if clinically indicated.
Healing was defined as complete epithelialization of the sinus orifices without discharge or signs of inflammation. Recurrence and complications were systematically documented throughout the follow-up period.
Statistical Analysis
Continuous variables (e.g., age, BMI, weight) were assessed for normality using the Shapiro-Wilk test. Normally distributed variables were compared using one-way analysis of variance (ANOVA).
Categorical variables (e.g., gender, smoking, recurrence, complications) were analyzed using Pearson’s Chi-square test or Fisher’s Exact test when expected cell counts were <5. Statistical significance was set at p<0.05. When necessary, post-hoc comparisons were performed using the Tukey HSD test.
All analyses were performed using IBM SPSS Statistics version 27.0 (IBM Corp., Armonk, NY, USA).
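The comparisons described above can be sketched with standard SciPy routines. This is an illustrative outline only: the group data are simulated from the reported means and SDs, and the contingency table uses the skin-necrosis counts from the Results as an example, not a re-analysis of the study data.

```python
# Hedged sketch of the statistical plan: one-way ANOVA for continuous
# variables and a chi-square test for categorical outcomes. Data are
# simulated; only the summary statistics come from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated ages per group, using the reported means/SDs and group sizes
age_a = rng.normal(28.4, 5.4, 42)   # Group A: crystallized phenol
age_b = rng.normal(28.9, 5.1, 43)   # Group B: 80% liquid phenol
age_c = rng.normal(28.8, 5.5, 41)   # Group C: 40% diluted phenol
f_stat, p_anova = stats.f_oneway(age_a, age_b, age_c)

# 3x2 table: skin necrosis (yes / no) per group, from the reported counts
table = np.array([[4, 38], [3, 40], [0, 41]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
# When expected cell counts fall below 5, Fisher's exact test is used
# instead; scipy.stats.fisher_exact handles 2x2 tables, so a 3x2 table
# would be analyzed via pairwise 2x2 comparisons.
print(f"ANOVA p = {p_anova:.3f}; chi-square p = {p_chi2:.3f} (dof = {dof})")
```

Note that with small expected counts, as in the skin-necrosis column, the chi-square approximation is unreliable, which is why the authors fall back to Fisher's exact test.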
Ethical Approval
This study was approved by the Clinical Research Ethics Committee of Afyonkarahisar Health Sciences University (Date: 2022-06-03, No: 2022/7). Written informed consent was obtained from all participants.
Results
A total of 126 patients with primary, non-complicated pilonidal sinus disease were included in the study. There were 42 patients in Group A (crystallized phenol), 43 in Group B (80% phenol), and 41 in Group C (low-concentration phenol). Among the 126 patients, 112 (88.9%) were male and 14 (11.1%) were female, with no significant gender difference across groups (p=0.782). Similarly, smoking status (p=0.609), alcohol use (p=0.691), and comorbidity rates (p=0.734) were comparable among the groups. Baseline demographic characteristics of the patients are summarized in Table 1.
Mean age was 28.7±5.3 years, with no significant difference among groups (Group A: 28.4±5.4, Group B: 28.9±5.1, Group C: 28.8±5.5; p=0.841). Mean BMI and body weight were also similar across groups (BMI: p=0.766; weight: p=0.712), supporting the demographic comparability across groups.
Postoperative outcomes and complication rates at 24 months are presented in Table 2. In terms of postoperative outcomes (Table 2), recurrence rates at 24 months were 11.9% in Group A, 9.7% in Group B, and 10.4% in Group C, with no statistically significant difference observed (p=0.923, Pearson chi-square). Skin necrosis was noted in 4 patients in Group A (9.5%) and 3 patients in Group B (7.0%), while no cases were detected in Group C (p=0.034). Fat necrosis occurred in 4 patients in Group A (9.5%), 5 in Group B (11.6%), and 2 in Group C (4.9%), but this was not statistically significant (p=0.447). Abscess formation was recorded in 3 patients each in Groups A and B and in 1 patient in Group C (p=0.541).
There was no statistically significant interaction between pit number and treatment modality with regard to recurrence (p=0.681) or complication rates.
No major bleeding events or unplanned readmissions occurred in any group throughout the follow-up period.
When stratified by pit count (<2 vs ≥2), recurrence and complication rates were descriptively higher in patients with ≥2 pits, particularly in Groups A and B; however, no statistically significant differences were observed between the subgroups within each treatment arm (p>0.05 for all comparisons). Notably, skin necrosis was observed only in the ≥2 pit subgroups of Group A (4 patients) and Group B (2 patients), whereas no cases were recorded in Group C regardless of pit count. Detailed postoperative outcomes stratified by pit count are shown in Table 3.
Discussion
Our study confirms the efficacy and safety of phenol-based minimally invasive approaches in treating pilonidal sinus disease (PSD), with comparable recurrence rates across all three phenol formulations and a markedly lower incidence of complications in the low-concentration phenol group. Phenol treatment has become increasingly favored over traditional excisional surgery due to its simplicity, outpatient applicability, and lower wound morbidity [1,4]. Surgical techniques such as the modified Limberg flap have demonstrated favorable outcomes in selected cases, although they typically require operating room conditions and longer recovery periods [9]. Comparative studies have shown that flap techniques such as Limberg and Karydakis can be effective but are often associated with higher surgical burden and longer convalescence [10]. This is further supported by a recent single-center experience using the modified Limberg flap, which demonstrated acceptable outcomes but with longer healing times and hospital-based requirements [11].
Recent meta-analyses demonstrate its non-inferiority to surgical excision regarding recurrence, and its superiority concerning recovery time and complication rates [12,13].
Crystallized phenol remains one of the most widely adopted techniques for PSD, offering high success rates with minimal intervention. Sakcak et al. reported a recurrence rate of 7.4% and low complication rates among 112 patients treated with crystallized phenol [6]. Similarly, Omarov et al. demonstrated a 96% success rate in a recent Turkish cohort, with recurrence associated with factors such as obesity and hirsutism [14]. Our Group A, which received crystallized phenol, showed a comparable recurrence rate of 11.9% and isolated, self-limited cases of fat necrosis, which aligns with previously published findings.
High-concentration (80%) liquid phenol remains a standard option, but concerns regarding local tissue toxicity have prompted the exploration of alternative strategies [5]. In our Group B, recurrence and overall efficacy were similar to Group A, but minor soft tissue complications, including skin necrosis (7.0%) and fat necrosis (11.6%), were slightly more common—consistent with previous reports [5,6].
Low-concentration phenol has recently been explored as a potentially safer alternative. Emiroglu et al. reported similar efficacy and fewer local side effects using diluted phenol (30%) [7]. In our Group C, recurrence was 10.4% with no cases of skin necrosis, suggesting a better safety profile without compromising effectiveness. This supports the increasing literature favoring low-concentration protocols in suitable patients [7,13].
A recent single-center retrospective study by Demir et al. compared crystallized and high-concentration liquid phenol applications in 80 patients with pilonidal sinus disease and found no significant difference in one-year success rates (90% vs. 95%). However, the liquid phenol group experienced a significantly higher rate of early complications (37.5% vs 15%; p=0.042), including skin maceration, cellulitis, and hematoma [15]. These findings align with our results, where high-concentration phenol (Group B) was associated with higher skin and fat necrosis rates compared to both crystallized (Group A) and low-concentration phenol (Group C). The absence of skin necrosis in Group C further supports the notion that reducing phenol concentration and adjusting delivery technique may minimize chemical injury to adjacent tissues.
Azizoglu et al. recently reported favorable outcomes using platelet-rich plasma (PRP) as an adjuvant to crystallized phenol in pediatric patients, demonstrating reduced healing time and recurrence [16]. Although promising, their findings are limited to pediatric populations, which restricts direct comparison with our adult cohort.
Silver nitrate has recently emerged as a promising minimally invasive alternative in the management of pilonidal sinus disease (PSD), particularly for patients who are not suitable candidates for phenol-based interventions. Kılcı et al. conducted a retrospective study involving patients treated with pit excision followed by silver nitrate application and reported a 78.6% complete healing rate at 12 months, with minimal adverse effects and short outpatient recovery times [17]. Similarly, Kanat et al. evaluated silver nitrate as a standalone mini-invasive therapy and found a 91.1% cure rate with a mean healing time of 15.6 days and a complication rate of 22.2%, demonstrating the practicality of silver nitrate in outpatient protocols [18]. In a comparative study, Taskın and Karabay directly assessed crystallized phenol versus silver nitrate and found similar recurrence rates at one-year follow-up (12.8% vs. 28.9%, respectively; p>0.05), although silver nitrate required more frequent applications. Interestingly, silver nitrate was associated with reduced postoperative pain and no observed cases of skin necrosis, which the authors attributed to its relatively milder caustic effect compared to phenol [19]. These findings align partially with our study, in which low-concentration liquid phenol achieved a recurrence rate of 10.4%—comparable to the success rates of both crystallized phenol and silver nitrate—while completely avoiding skin necrosis. Although our protocol did not incorporate silver nitrate, the favorable safety profile and efficacy of low-concentration phenol in our cohort suggest that it may offer similar advantages without the need for repeated applications. Taken together, these studies reinforce the evolving role of non-excisional chemical therapies in PSD and support individualized treatment planning based on tissue tolerance, patient comorbidities, and logistical feasibility.
Systematic reviews continue to confirm that phenol-based and other non-excisional techniques are effective and associated with lower morbidity compared to surgical excision. In the 2023 systematic review by Huurman et al., recurrence rates for methods such as pit picking (with adjunctive phenol, laser, or endoscopic techniques) ranged from 0% to 29%, and wound healing times varied between 3 and 47 days, depending on application frequency and technique [13]. Our protocol, which involved single-session outpatient phenolization and structured follow-up, falls well within this favorable range.
Stratification by pit number revealed no significant relationship with recurrence or complication rates in our series. This supports previous observations that pit count alone is not a reliable prognostic factor in minimally invasive PSD management [2].
Limitation
This study has several limitations. It is a single-center retrospective analysis with a modest sample size, which may limit external validity. Although data were prospectively collected, the absence of a surgical or silver nitrate control group restricts comparative interpretation. Additionally, our follow-up was limited to 24 months and may not fully reflect long-term recurrence trends.
Moreover, since all patients were discharged home on the day of the procedure, postoperative pain assessment using standardized scoring systems such as the visual analog scale (VAS) could not be performed, limiting our evaluation of immediate patient comfort.
Future multicenter randomized studies with extended follow-up are warranted to further clarify the relative merits of various phenol concentrations and alternative chemical agents.
Conclusion
Low-concentration phenol yielded comparable therapeutic success to high-concentration and crystallized phenol in the management of primary pilonidal sinus disease. The absence of skin necrosis and low complication rates in the low-concentration group underscore its clinical advantage, supporting its use as a safe and effective conservative treatment option in routine outpatient clinical practice.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content including study design, data collection, analysis and interpretation, writing, some of the main line, or all of the preparation and scientific review of the contents and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Gil LA, Deans KJ, Minneci PC. Management of pilonidal disease: a review. JAMA Surg. 2023;158(8):875-83.
2. Dainius E, Vaiciute MK, Parseliunas A. Surgical treatment of pilonidal disease-short-term follow up results of minimally invasive pit-picking surgery versus radical excision without suturing: a prospective randomised trial. Heliyon. 2024;10(11):e31497.
3. Chintapatla S, Safarani N, Kumar S. Sacrococcygeal pilonidal sinus: historical review, pathological insight and surgical options. Tech Coloproctol. 2003;7(1):3-8.
4. Al-Khamis A, McCallum I, King PM. Healing by primary versus secondary intention after surgical treatment for pilonidal sinus. Cochrane Database Syst Rev. 2010;2010(1):CD006213.
5. Kayaalp C, Aydin C. Review of phenol treatment in sacrococcygeal pilonidal disease. Tech Coloproctol. 2009;13(3):189-93.
6. Sakcak I, Avsar FM, Cosgun E. Comparison of the application of low concentration and 80% phenol solution in pilonidal sinus disease. JRSM Short Rep. 2010;1(1):5.
7. Emiroglu M, Karaali C, Salimoglu S. The effect of phenol concentration on the treatment of pilonidal sinus disease: early results of a prospective randomized study. Int Surg. 2016;101(3-4):127-32.
8. Kayaalp C, Tolan K. Crystallized or liquid phenol application in pilonidal sinus treatment. Indian J Surg. 2015;77(6):562-3.
9. Sabuncuoglu MZ, Sabuncuoglu A, Dandin O, et al. Eyedrop-shaped, modified Limberg transposition flap in the treatment of pilonidal sinus disease. Asian J Surg. 2015;38(3):161-7.
10. Erkent M, Şahiner İT, Bala M, et al. Comparison of primary midline closure, Limberg flap, and Karydakis flap techniques in pilonidal sinus surgery. Med Sci Monit. 2018;24:8959-63.
11. Uçaner B, Çimen Ş, Buldanlı MZ. Evaluation of single center clinical experience in patients undergoing modified Limberg flap technique in pilonidal sinus disease. J Med Palliat Care. 2023;4(6):694-8.
12. Grabowski J, Oyetunji TA, Goldin AB, et al. The management of pilonidal disease: a systematic review. J Pediatr Surg. 2019;54(11):2210-21.
13. Huurman EA, Galema HA, de Raaff CAL, Wijnhoven BPL, Toorenvliet BR, Smeenk RM. Non-excisional techniques for the treatment of intergluteal pilonidal sinus disease: a systematic review. Tech Coloproctol. 2023;27(12):1191-200.
14. Omarov N, Kaya M. Outcomes of crystallized phenol treatment in sacrococcygeal pilonidal sinus disease and factors affecting recurrence: A retrospective study. Turk J Colorectal Dis. 2024;34(1):7-12.
15. Demir M, Bölük SE, Sücüllü I. Efficacy of crystallized and liquid form phenol application in pilonidal sinus disease: a single-center retrospective study. Tech Coloproctol. 2025;29(1):109.
16. Azizoglu M, Klyuev S, Kamci TO. Platelet-rich plasma as an adjuvant therapy to crystallized phenol in the treatment of pediatric pilonidal sinus disease: A prospective randomized controlled trial. J Pediatr Surg. 2025;60(1):161934.
17. Kılcı B, Dalda Y, Şahin E. Pilonidal disease treatment with pit excision and silver nitrate: A retrospective study. Iran J Colorectal Res. 2024;12(1):6-10.
18. Kanat BH, Yazar FM, Kutluer N, et al. Use of silver nitrate application as mini-invasive treatment of pilonidal sinus disease. Chirurgia (Bucur). 2020;115(6):775-82.
19. Taskın AK, Karabay UA. Comparison of crystallised phenol and silver nitrate treatments in sacrococcygeal pilonidal sinus. Surg Chron. 2025;30(1):73-7.
Melih Can Gül, Sinan Şener. Low-concentration liquid phenol demonstrates similar therapeutic success to high-concentration and crystallized phenol in pilonidal sinus management. Ann Clin Anal Med 2025;16(9):668-672
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc/4.0/
The impact of carotid artery cross-clamp duration on postoperative hypertension following endarterectomy
Güler Gülsen Ersoy
Department of Cardiovascular Surgery, Faculty of Medicine, Kastamonu University, Kastamonu, Turkey
DOI: 10.4328/ACAM.22830 Received: 2025-07-29 Accepted: 2025-08-29 Published Online: 2025-08-30 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):673-677
Corresponding Author: Güler Gülsen Ersoy, Department of Cardiovascular Surgery, Faculty of Medicine, Kastamonu University, Kastamonu, Turkey. E-mail: gg.ersoy@hotmail.com P: +90 532 667 51 12 Corresponding Author ORCID ID: https://orcid.org/0000-0002-2000-3845
This study was approved by the Ethics Committee of Kastamonu University, Faculty of Medicine (Date: 2024-02-07, No: 22)
Aim: The incidence of carotid artery stenosis is increasing globally with an aging population. Carotid endarterectomy is a surgical treatment for severe carotid stenosis. However, hypertension, which frequently develops after carotid endarterectomy, significantly increases postoperative morbidity and mortality. While various causes of post-endarterectomy hypertension have been proposed, the exact mechanism remains unclear. This study aims to investigate whether the duration of carotid clamping influences the development of post-endarterectomy hypertension.
Materials and Methods: Eighty patients who underwent carotid endarterectomy between 2022 and 2024 were included in the study. Eight patients were women (10%) and 72 were men (90%), with an average age of 70.83 years. The average carotid artery clamp time during the operations was 13.8 minutes. Arterial blood pressure values were measured and recorded 1 hour before and 1 hour after the surgery. The relationship between the cross-clamp duration applied to the internal, common, and external carotid arteries during carotid endarterectomy and postoperative blood pressure values was examined.
Results: In this study involving 80 patients who underwent carotid endarterectomy, a statistically significant relationship was found between carotid cross-clamp duration and postoperative hypertension. Systolic and diastolic blood pressure values increased significantly with cross-clamp duration (p<0.001).
Discussion: The prolongation of cross-clamp time during carotid endarterectomy is associated with an increased degree of hypertension observed postoperatively. These findings suggest that efforts to minimize cross-clamp times could reduce the risk of hypertension and improve patient outcomes. Future studies should investigate the effects of different surgical techniques and patient management protocols on hypertension, providing additional data to prevent this complication.
Keywords: carotid endarterectomy, arterial hypertension, atherosclerosis
Introduction
Carotid artery disease holds an important place among atherosclerotic cardiovascular diseases. Carotid artery stenosis resulting from atherosclerosis is a major global source of mortality and morbidity in aging populations. Severe carotid artery stenosis can reduce cerebral blood flow, leading to severe complications such as ischemic and hemorrhagic stroke and, ultimately, death. Treatment should therefore be initiated before these complications develop. Despite advances in medical and technological fields, open carotid endarterectomy remains the most frequently performed treatment for severe carotid artery stenosis [1]. This surgical procedure is the preferred treatment method and has high success rates. However, these surgeries still carry certain risks of complications, ranging from relatively mild conditions such as facial hypoesthesia, hoarseness, and facial asymmetry to more severe outcomes like stroke [2]. A common complication observed after carotid endarterectomy is arterial hypertension, which, like other complications, can be severe. Various causes have been suggested for the development of hypertension after carotid endarterectomy; however, the exact mechanisms are not fully understood [3]. Regardless of the etiology, postoperative hypertension and its severity are associated with increased rates of stroke and mortality [4]. Additionally, this hypertensive complication can itself lead to other complications, such as intracranial hemorrhage and stroke.
This study aimed to investigate the effect of cross-clamp duration on postoperative hypertension in patients who underwent carotid endarterectomy. For this purpose, systemic arterial blood pressure values were recorded 1 hour before and 1 hour after surgery for 80 patients who underwent carotid endarterectomy between 2022 and 2024. The relationship between the cross-clamp duration applied to the carotid artery during carotid endarterectomy and postoperative systemic arterial blood pressure values was analyzed.
Materials and Methods
Eighty patients who underwent carotid endarterectomy between 2022 and 2024 were included in the study. This retrospective study was conducted in accordance with the principles of the Declaration of Helsinki. Medical records were reviewed from the hospital database for information regarding the preoperative, intraoperative, and postoperative periods.
Among the patients, 8 were female (10%) and 72 were male (90%), with an average age of 70.83 years. The average cross-clamp time for the carotid artery during the surgeries was 13.8 minutes. Blood pressure values were recorded using a cuff on the same arm 1 hour before and 1 hour after surgery: preoperative measurements were taken on the left arm before the patients entered the operating room, and postoperative measurements were taken on the same left arm. The relationship between the total cross-clamp duration applied to the internal, common, and external carotid arteries during the surgery and the systolic and diastolic arterial blood pressure values recorded one hour before and one hour after the surgery was examined for statistical significance.
Based on the assumption that postoperative hypertension after carotid endarterectomy mainly results from transient carotid sinus baroreceptor dysfunction, we analyzed only the first postoperative hour, when this effect is most evident. To avoid confounding from ICU antihypertensive therapy, blood pressure was measured invasively from the left radial artery with patients supine and head elevated 45°, all performed in the same ICU by a team with identical training.
Surgical technique
Patients included in the study had unilateral carotid artery stenosis of 70% or more. All surgeries were performed by the same surgical team using the classical carotid endarterectomy technique. All carotid endarterectomy procedures were conducted under general anesthesia, without the use of shunts, and with primary closure. All patients underwent standardized intraoperative hemodynamic management and anesthetic protocols, including continuous arterial pressure monitoring and vasoactive agents as clinically indicated. The duration from applying the clamp to the internal carotid artery until its removal was defined as the total cross-clamp time. The mean cross-clamp duration across the 80 cases was 13.8 minutes.
Statistical analysis
All statistical analyses were performed using SPSS 25 software (IBM SPSS Statistics, IBM Corp., Armonk, NY, USA). The normality of the data distribution was assessed using the Shapiro-Wilk test, and non-parametric tests were employed for data that were not normally distributed. The Wilcoxon signed-rank test was used to compare systolic and diastolic blood pressure values before and after surgery. The relationship between postoperative systolic and diastolic blood pressure and cross-clamp time was analyzed using the Spearman correlation test.
Additionally, the relationship between preoperative and postoperative blood pressure values and cross-clamp duration was examined through partial correlation analysis. A p-value of < 0.05 was considered statistically significant for all analyses. Results are presented in tables and graphs together with their significance levels.
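For readers unfamiliar with this pipeline, the sequence of tests described above (normality check, paired non-parametric comparison, rank correlation) can be sketched in Python with SciPy. The data below are simulated purely for illustration and are not the study dataset; the variable names and effect sizes are hypothetical.

```python
# Illustrative sketch of the statistical pipeline (simulated data, not the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical values for 80 patients: clamp time (min) and systolic BP (mmHg).
clamp_min = rng.uniform(8, 20, 80)
sbp_pre = rng.normal(140, 13, 80)
sbp_post = sbp_pre + 2.5 * clamp_min + rng.normal(0, 5, 80)  # assumed effect

# 1) Normality of the paired differences (Shapiro-Wilk).
_, p_norm = stats.shapiro(sbp_post - sbp_pre)

# 2) Paired pre/post comparison (Wilcoxon signed-rank test).
_, p_wilcoxon = stats.wilcoxon(sbp_pre, sbp_post)

# 3) Rank correlation between clamp time and postoperative BP (Spearman).
rho, p_spearman = stats.spearmanr(clamp_min, sbp_post)

print(f"Shapiro p={p_norm:.3f}, Wilcoxon p={p_wilcoxon:.2e}, "
      f"Spearman rho={rho:.2f} (p={p_spearman:.2e})")
```

With a simulated clamp-time effect built in, the Wilcoxon test detects the pre/post increase and the Spearman coefficient is positive, mirroring the structure (though not the numbers) of the analysis reported here.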
Ethical Approval
This study was approved by the Ethics Committee of Kastamonu University, Faculty of Medicine (Date: 2024-02-07, No: 22).
Results
The systolic and diastolic blood pressure values before and after surgery were compared using the Wilcoxon test. The median systolic blood pressure values were 140 mmHg (110-160) preoperatively and 160 mmHg (130-230) postoperatively. The mean values increased from 140.5 ± 13.1 mmHg before surgery to 162.5 ± 24.2 mmHg after surgery. The p-value (<0.001) indicates that the increase in postoperative systolic blood pressure is statistically significant.
A similar increase was observed in diastolic blood pressure as well. The median diastolic blood pressure values were 90 mmHg (70-110) preoperatively and 100 mmHg (70-120) postoperatively. The mean values were 89.1 ± 10.3 mmHg before surgery and 100.1 ± 12.1 mmHg after surgery. The p-value (<0.001) indicates that the postoperative diastolic blood pressure increase is also statistically significant.
In summary, both systolic and diastolic blood pressure increased significantly after surgery: the median systolic blood pressure rose from 140 mmHg to 160 mmHg and the median diastolic blood pressure from 90 mmHg to 100 mmHg, with the Wilcoxon test yielding p < 0.001 for both comparisons. These values are shown in Table 1.
The relationship between carotid cross-clamp duration and postoperative blood pressure was analyzed using the Spearman correlation. Postoperative systolic blood pressure showed a strong positive correlation with clamp duration (r = 0.795, p < 0.001), as did postoperative diastolic blood pressure (r = 0.756, p < 0.001). The increase in systolic blood pressure relative to preoperative values also correlated strongly with clamp duration (r = 0.765, p < 0.001). Thus, both postoperative systolic and diastolic blood pressure rise as clamp duration increases, and these relationships are statistically highly significant. Notably, the correlation between systolic blood pressure and clamp time (r = 0.795) is higher than that between diastolic blood pressure and clamp time (r = 0.756).
Partial correlation analysis was conducted to examine the relationship between clamp duration and preoperative and postoperative blood pressure values. The partial correlation coefficient between preoperative and postoperative systolic blood pressure was r = 0.392 (p < 0.001), indicating a moderate positive relationship when controlling for clamp duration. For diastolic blood pressure, the partial correlation coefficient was r = 0.348, which is also statistically significant (p = 0.002). These results suggest that clamp duration is related to postoperative blood pressure values, with the relationship statistically stronger for systolic than for diastolic blood pressure. Correlation analyses between clamp duration and blood pressure values are shown in Table 2.
In the multivariate logistic regression analysis, carotid clamp duration above the determined cut-off value was found to be a strong independent predictor of postoperative systolic hypertension (OR = 0.002, 95% CI 0.000–0.039, p < 0.001). Other variables, including age, hypertension, diabetes mellitus, chronic kidney disease, smoking status, coronary artery disease, preoperative systolic blood pressure, and antihypertensive drug use, were not significantly associated with postoperative systolic hypertension in the adjusted model. These data are shown in Table 3.
Receiver operating characteristic (ROC) curve analysis was performed to evaluate the predictive value of clamp duration for postoperative hypertension. The analysis demonstrated excellent discrimination, with an area under the curve (AUC) of 0.930 (95% CI: 0.864–0.995, p < 0.001).
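Partial correlation, used above to relate pre- and postoperative pressures while controlling for clamp duration, amounts to correlating the residuals of each variable after regressing out the control variable. A minimal NumPy sketch on simulated data (not the study dataset; all names here are hypothetical) shows how controlling for a shared driver z removes a spurious raw correlation:

```python
# Illustrative sketch of partial correlation via regression residuals.
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y after removing the linear effect of z."""
    def residuals(a, ctrl):
        # Residuals of a regressed on ctrl (with an intercept term).
        design = np.column_stack([np.ones_like(ctrl), ctrl])
        coef, *_ = np.linalg.lstsq(design, a, rcond=None)
        return a - design @ coef
    rx, ry = residuals(x, z), residuals(y, z)
    return float(np.corrcoef(rx, ry)[0, 1])

# Simulated example: x and y are both driven by a shared variable z,
# so they correlate until z is controlled for.
rng = np.random.default_rng(1)
z = rng.normal(0.0, 1.0, 500)
x = z + rng.normal(0.0, 1.0, 500)
y = z + rng.normal(0.0, 1.0, 500)

raw = float(np.corrcoef(x, y)[0, 1])
part = partial_corr(x, y, z)
print(f"raw r = {raw:.2f}, partial r (controlling z) = {part:.2f}")
```

In this toy setup the raw correlation is substantial while the partial correlation is near zero; in the study the partial correlations remained significant, indicating an association beyond the controlled variable.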
The optimal cut-off value for clamp duration, determined using the Youden index, was 12.5 minutes, yielding a sensitivity of 90.5% and a specificity of 89.5% for predicting clinically significant postoperative hypertension. The predictive value of clamp duration for postoperative hypertension is demonstrated in the ROC curve shown in Figure 1. Among the patients who developed postoperative hypertension, initial management was performed with intravenous nitroglycerin according to ICU protocols; in cases resistant to nitroglycerin, beta-blockers were administered based on the patient’s heart rhythm. Blood pressure was successfully controlled in all cases within the early postoperative period. No cases of intracranial hemorrhage, cerebral hyperperfusion syndrome, or perioperative stroke were observed, and no in-hospital mortality occurred.
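The Youden-index cut-off selection described above can be sketched as follows. The toy data are illustrative only, not the patient data, and the function is a simplified version of what ROC software computes:

```python
# Illustrative sketch: picking a cut-off by maximising Youden's J.
import numpy as np

def youden_cutoff(values, outcome):
    """Return (cutoff, sensitivity, specificity) maximising
    Youden's J = sensitivity + specificity - 1 over observed values."""
    best = (None, 0.0, 0.0, -1.0)
    for c in np.unique(values):
        pred = values >= c                 # predict hypertension if >= cutoff
        tp = np.sum(pred & (outcome == 1))
        fn = np.sum(~pred & (outcome == 1))
        tn = np.sum(~pred & (outcome == 0))
        fp = np.sum(pred & (outcome == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if j > best[3]:
            best = (c, sens, spec, j)
    return best[:3]

# Toy example: hypertension mostly occurs at longer clamp times.
clamp = np.array([8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], float)
htn = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])
cut, sens, spec = youden_cutoff(clamp, htn)
print(cut, sens, spec)  # cutoff 13.0 separates the toy groups perfectly
```

Standard ROC implementations commonly evaluate candidate thresholds midway between adjacent observed values, which is how a cut-off such as 12.5 minutes arises from data measured in whole minutes.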
Discussion
Carotid artery disease is a severe cardiovascular disease. Severe carotid artery stenosis, if left untreated, can reduce cerebral blood flow, resulting in complications such as ischemic and hemorrhagic stroke and, ultimately, death. Therefore, particularly in patients with stenosis of 70% or more, surgical intervention may be necessary. Surgical treatment options include open carotid endarterectomy and endovascular methods, which aim to improve patients’ quality of life and reduce the risk of complications. The decision for surgical intervention should be based on the patient’s overall condition and the severity of stenosis, tailored to individual needs [1].
The surgical treatment of carotid artery disease dates to 1953, when Michael DeBakey performed the first carotid endarterectomy [5]. Since then, carotid endarterectomy has remained one of the safest surgical methods. However, various complications may occur following carotid artery surgery [6]. These complications can range from mild facial numbness and temporary neurological symptoms to severe outcomes such as myocardial infarction, stroke, or even death [2]. Therefore, careful assessment and management of risks during such surgical procedures are critical.
Postoperative hypertension is one of the common complications of carotid endarterectomy. It has the potential to lead to additional complications, ranging from hematoma formation at the incision site to severe conditions such as cerebral hemorrhage [7,8]. Therefore, careful monitoring and management of postoperative hypertension are critical for improving patient outcomes.
Various mechanisms have been suggested regarding the causes of hypertension observed after carotid endarterectomy. One such mechanism is the impairment of cerebral autoregulation following carotid surgery. Anatomically, the baroreceptors located in the carotid bulb are known to have physiological effects on blood pressure and heart rate. Blood pressure regulation is maintained through reflexes initiated by stimuli from the glossopharyngeal neurons in the adventitia via the carotid sinus nerve (Hering’s nerve) to the upper brain [9]. Some studies have suggested that surgical manipulation of the carotid bulb alters baroreceptor function, triggering hypertension through humoral or neural mechanisms [10,11]. Bunag and colleagues conducted experiments in dogs and concluded that acute bilateral carotid occlusion leads to increased levels of renin and angiotensin. They found that acute cerebral ischemia stimulates the sympathetic nervous system, which promotes renal renin release, elevating angiotensin levels and resulting in hypertension [12].
Towne et al. have noted that postoperative hypertension is more common in patients who were hypertensive before surgery. In their study, it was highlighted that the likelihood of developing postoperative hypertension is more significant in patients with preoperative hypertension compared to those who were normotensive before the procedure [7].
Demirel and colleagues investigated the effect of the carotid endarterectomy technique on postoperative hypertension and concluded that postoperative hypertension was more frequently observed after the eversion technique compared to classical endarterectomy [13]. However, Ben Ahmed and colleagues demonstrated that postoperative hypertension following carotid endarterectomy is not related to the surgical technique [14]. Despite all these studies, the cause and mechanism of hypertension after carotid endarterectomy remain unclear [15].
In this study, ROC curve analysis was performed to evaluate the predictive value of clamping time for postoperative hypertension. According to the results, the optimal cutoff value for clamping time was 12.5 minutes, providing 90.5% sensitivity and 89.5% specificity in predicting clinically significant postoperative hypertension.
From a clinical perspective, maintaining clamp time below this threshold may substantially reduce the risk of postoperative hypertension and its associated complications, such as cerebral hyperperfusion syndrome, intracranial hemorrhage, and stroke.
All cases were performed using the classical carotid endarterectomy technique with primary closure in this study. In the literature, the primary closure technique has been reported to offer advantages in terms of lower restenosis and postoperative complication rates, as well as the absence of early mortality [16,17]. The use of this surgical approach in our series may be considered a factor that enhances the reliability of the hemodynamic outcomes obtained in this study.
In this study, the effect of clamping time on postoperative hypertension after carotid endarterectomy was investigated. Studies on the impact of cross-clamp duration on postoperative hypertension are scarce in the literature; in this respect, the present study offers a novel contribution. A statistically significant relationship was found between the cross-clamp duration applied to the carotid artery and the occurrence of postoperative hypertension in the 80 cases undergoing carotid endarterectomy. After the procedure, substantial increases were observed in systolic and diastolic blood pressure values. Postoperative systolic and diastolic blood pressure values positively and significantly correlated with clamp duration. Notably, the correlation between systolic blood pressure and clamp duration (r = 0.795) was found to be higher than that for diastolic blood pressure (r = 0.756).
Our results are consistent with previous reports indicating that intraoperative factors, including surgical technique and clamp duration, play a pivotal role in postoperative blood pressure variability after carotid endarterectomy [18]. The identification of a specific clamp time threshold in our study adds a practical perspective to existing literature, suggesting that procedural timing is not only a technical detail but also a modifiable determinant of early hemodynamic stability. Integrating this consideration into operative planning could complement established perioperative blood pressure management strategies.
Limitation
This study has several limitations. It was conducted in a single center with a limited sample size, which may restrict generalizability. Long-term outcomes were not assessed, and only the first postoperative hour was analyzed, based on the assumption that baroreceptor-mediated hypertensive responses occur predominantly in this period. No additional 6–24-hour blood pressure data were collected; therefore, delayed or prolonged responses after pharmacologic intervention or other variables may not be fully reflected. Future studies with longer and continuous postoperative blood pressure monitoring in larger, multicenter cohorts are needed to confirm these findings.
Conclusion
Although the pathophysiology of hypertension commonly observed after carotid endarterectomy has not yet been fully clarified, its impact on postoperative morbidity and mortality is highly significant. Postoperative hypertension can adversely affect cerebral blood flow, increasing the risk of serious complications such as ischemic or hemorrhagic stroke. Additionally, hypertension may contribute to the development of neurological complications by raising intracranial pressure. Therefore, preventing postoperative hypertension can enhance patient comfort and reduce complication rates.
According to the results of our study, prolonged cross-clamp duration applied to completely occlude blood flow in the common, internal, and external carotid arteries during carotid endarterectomy is associated with increased arterial hypertension values postoperatively. These findings suggest that longer cross-clamp durations may contribute to the rise in postoperative hypertension. Therefore, the cross-clamp duration during carotid endarterectomy should be kept as short as possible. Extended cross-clamp durations should be considered a critical factor in the management of postoperative hypertension, as this may have significant effects on patients’ overall health and the frequency of postoperative complications.
Scientific Responsibility Statement
The authors declare that they are responsible for the article’s scientific content, including study design, data collection, analysis and interpretation, writing, preparation and scientific review of the contents, and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Talathi S, Lipsitz EC. Current therapy for carotid webs. Ann Vasc Surg. 2025;113:415-20.
2. Greenstein AJ, Chassin MR, Wang J, et al. Association between minor and major surgical complications after carotid endarterectomy: results of the New York carotid artery surgery study. J Vasc Surg. 2007;46(6):1138-46.
3. Kazantsev AN, Lider RY, Korotkikh AV, et al. Effects of different types of carotid endarterectomy on the course of resistant arterial hypertension. Vascular. 2024;32(2):458-66.
4. Newman JE, Bown MJ, Sayers RD, et al. Post-carotid endarterectomy hypertension. Part 1: Association with pre-operative clinical, imaging, and physiological parameters. Eur J Vasc Endovasc Surg. 2017;54(5):551-63.
5. Uno M. History of carotid artery reconstruction around the world and in Japan. Neurol Med Chir (Tokyo). 2023;63(7):283-94.
6. Steinmetz E, Cottenet J, Mariet AS, et al. Editor’s choice – stroke and death following carotid endarterectomy or carotid artery stenting: a ten year nationwide study in France. Eur J Vasc Endovasc Surg. 2025;69(3):359-70.
7. Sultan S, Acharya Y, Dulai M, et al. Redefining postoperative hypertension management in carotid surgery: a comprehensive analysis of blood pressure homeostasis and hyperperfusion syndrome in unilateral vs. bilateral carotid surgeries and implications for clinical practice. Front Surg. 2024;11:1361963.
8. Teng L, Fang J, Zhang Y. Perioperative baseline β-blockers: an independent protective factor for post-carotid endarterectomy hypertension. Vascular. 2021;29(2):270-9.
9. Shoja MM, Rai R, Lachkar S, et al. The carotid sinus nerve and the first English translation of Hering’s original research on this nerve. Cureus. 2019;11(1):e3898.
10. Cao Q, Zhang J, Xu G. Hemodynamic changes and baroreflex sensitivity associated with carotid endarterectomy and carotid artery stenting. Interv Neurol. 2015;3(1):13-21.
11. Ajduk M, Tudorić I, Sarlija M, et al. Effect of carotid sinus nerve blockade on hemodynamic stability during carotid endarterectomy under local anesthesia. J Vasc Surg. 2011;54(2):386-93.
12. Tsuda K. Renin-Angiotensin system and sympathetic neurotransmitter release in the central nervous system of hypertension. Int J Hypertens. 2012;2012:474870.
13. Demirel S, Bruijnen H, Attigah N. The effect of eversion and conventional-patch technique in carotid surgery on postoperative hypertension. J Vasc Surg. 2011;54(1):80-6.
14. Ben Ahmed S, Daniel G, Benezit M, et al. Does the technique of carotid endarterectomy determine postoperative hypertension? Ann Vasc Surg. 2015;29(6):1272-80.
15. Stoneham MD, Thompson JP. Arterial pressure management and carotid endarterectomy. Br J Anaesth. 2009;102(4):442-52.
16. Arslantürk O, Keskin E. Revitalizing traditional carotid endarterectomy methods: a comprehensive review of primary closure techniques. Pam Med J. 2025;18(2):418-24.
17. Arslanturk O, Keskin E. Timing of carotid revascularization after acute ischemic stroke: A retrospective comparison of endarterectomy and stenting. Medicine Science. 2025;14(1):141-7.
18. Demirel S, Goossen K, Bruijnen H, et al. Systematic review and meta-analysis of postcarotid endarterectomy hypertension after eversion versus conventional carotid endarterectomy. J Vasc Surg. 2017;65(3):868-82.
Güler Gülsen Ersoy. The impact of carotid artery cross-clamp duration on postoperative hypertension following endarterectomy. Ann Clin Anal Med 2025;16(9):673-677
Clinical effects of potential drug-drug interactions in hemorrhagic stroke patients admitted to the emergency department
Tugce Uskur 1, Oya Guven 2
1 Department of Medical Pharmacology, 2 Department of Emergency Medicine, Faculty of Medicine, Kırklareli University, Kırklareli, Turkey
DOI: 10.4328/ACAM.22836 Received: 2025-07-31 Accepted: 2025-08-30 Published Online: 2025-08-31 Printed: 2025-09-01 Ann Clin Anal Med 2025;16(9):678-682
Corresponding Author: Tugce Uskur, Department of Medical Pharmacology, Faculty of Medicine, Kırklareli University, Kırklareli, Turkey. E-mail: tugceuskur@gmail.com P: +90 507 447 27 07 Corresponding Author ORCID ID: https://orcid.org/0000-0001-6626-4859
Other Authors ORCID ID: Oya Guven, https://orcid.org/0000-0002-6389-4561
This study was approved by the Ethics Committee of Kırklareli University, Faculty of Medicine (Date: 2025-07-17, No: P20250032R01)
Aim: Hemorrhagic stroke is a medical condition characterized by non-traumatic intracranial bleeding, leading to high mortality and morbidity, and is frequently diagnosed in emergency departments. Its prevalence varies between countries, highlighting the importance of investigating modifiable risk factors. Drug-drug interactions (DDIs) may contribute to its development. This study aims to evaluate the medications used by hemorrhagic stroke patients over the past six months, assess potential DDIs, and examine their relationship with age, sex, and clinical parameters.
Materials and Methods: A retrospective analysis was conducted on patients diagnosed with hemorrhagic stroke both radiologically and clinically in the emergency department. Patients were categorized into three groups: hemorrhagic stroke risk interaction, non-risk interaction, and no interaction.
Results: Of 198 patients, 68% were male and 32% female, with a mean age of 61.93 ± 18.2 years. Chronic disease was present in 67.6% of patients; hypertension was the most common. At least one medication was used by 56.1%, and 49.5% of them had DDIs. Overall, DDIs were identified in 27.7% of all patients. Patients with hemorrhagic stroke risk interactions were significantly older (p < 0.001) and had more comorbidities (p < 0.0001). Hypertension was significantly more frequent in this group (p < 0.0001). Risk-related DDIs included 30 associated with elevated blood pressure, 12 with bleeding, and 2 with arrhythmia.
Discussion: DDIs are a modifiable risk factor in hemorrhagic stroke. Medication histories should be carefully reviewed, particularly in elderly, multimorbid patients. Caution is essential in patients using antihypertensives, and polypharmacy should be minimized to reduce interaction risks.
Keywords: hemorrhagic stroke, emergency department, drug-drug interaction
Introduction
Emergency departments are the primary units where patients with life-threatening conditions are first admitted, representing some of the busiest and most critical services within hospitals. One of the major diagnoses made in emergency departments is stroke, and a significant proportion of patients receive their stroke diagnosis in this setting [1]. Stroke is a medical condition characterized by cell death due to insufficient blood flow to a part of the brain [2].
Stroke is divided into two groups: ischemic and hemorrhagic. Hemorrhagic stroke, which accounts for 13% of all stroke cases, is a neurological condition caused by bleeding into the brain tissue [3,4]. Patients who experience a hemorrhagic stroke are reported to have more severe neurological impairments and higher mortality rates compared to those with ischemic stroke [5]. Stroke prevalence is higher in low- and middle-income countries, and mortality due to hemorrhagic stroke ranges between 25–30% in high-income countries, whereas it varies between 30–48% in low- and middle-income countries [6]. This disparity suggests that modifiable risk factors such as inadequate blood pressure control, delayed access to healthcare, and lack of appropriate treatment are not sufficiently managed in lower-income settings. Indeed, controlling high blood pressure—one of the most important modifiable risk factors—has been shown to significantly reduce the risk of recurrent stroke [7,8].
The literature highlights drug-drug interactions (DDIs) as one of the contributing factors to the failure of antihypertensive treatment [9,10]. A drug-drug interaction is defined as the effect that occurs when two or more drugs are administered concurrently and influence each other’s pharmacological or pharmacokinetic properties [11]. Among the potential DDI-related contributors to hemorrhagic stroke are increased bleeding risk, impaired platelet function, concurrent use of antithrombotic agents, neglect of age-related pharmacokinetic changes, and drug combinations that may induce cardiac arrhythmias [12–17].
However, current literature lacks sufficient data evaluating the frequency and clinical consequences of DDIs in patients who have experienced a hemorrhagic stroke. In particular, the extent to which the concomitant use of anticoagulants, antiplatelet agents, and NSAIDs increases the risk of intracranial hemorrhage that may lead to hemorrhagic stroke has not been clearly established. Furthermore, inadequate blood pressure control due to polypharmacy or the emergence of rhythm disturbances are also considered possible mechanisms contributing to the development of hemorrhagic stroke. These potential relationships have been addressed only to a limited extent in the context of DDIs in the existing literature.
This study aims to analyze the medications used in the last six months by patients diagnosed with hemorrhagic stroke in the emergency department, to evaluate the frequency of potential DDIs, the associations between interaction severity and patients’ age, sex, and certain clinical parameters (e.g., prognosis, length of hospital stay), and to assess the possible contribution of these interactions to the development of hemorrhagic stroke. The data obtained may support the development of safer drug management strategies in patients at risk for hemorrhagic stroke and provide guidance to clinicians regarding high-risk drug combinations.
Materials and Methods
Study Design and Population
This study was conducted as a retrospective analysis. Following the approval of the Ethics Committee for Non-Interventional Research at Kırklareli University Faculty of Medicine (approval number: P20250032R01), the study was initiated at Kırklareli Training and Research Hospital.
The study population consisted of patients who were diagnosed with hemorrhagic stroke both radiologically and clinically in the Emergency Department of Kırklareli Training and Research Hospital between January 1, 2023, and December 31, 2024, and who met the inclusion criteria.
Inclusion Criteria
• Patients diagnosed with hemorrhagic stroke in the emergency department between 01/01/2023 and 31/12/2024
• Individuals aged 18 years or older
• Patients for whom a brain computed tomography (CT) was requested
• Patients with a confirmed diagnosis of hemorrhagic stroke based on clinical and radiological findings
• Patients with complete medication information available in the hospital information management system
Exclusion Criteria
• Patients diagnosed with ischemic stroke
• Patients under 18 years of age
• Patients for whom brain CT was not performed
• Patients with incomplete or inaccessible medical records
Data Collection and Drug-Drug Interaction Assessment
During the data collection process, demographic information (age, sex), date of hospital admission, diagnosis, current medication list at the time of admission, and comorbidities were retrieved from the electronic patient record system. Information on medications used regularly within the last 6 months was obtained from prescription records and discharge summaries.
The collected medication data were classified into major, moderate, and minor interaction categories using the Drugs.com interaction checker, a widely used drug interaction reference. In addition, patients were categorized into three groups based on the potential hemorrhagic stroke risk associated with their drug interactions: (1) interactions with an increased risk of hemorrhagic stroke, (2) interactions not associated with hemorrhagic stroke risk, and (3) no detected drug interactions.
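The three-way categorization described above can be sketched in a few lines of Python. The interaction lookup table and drug names below are hypothetical illustrations; in the study itself, severity and risk labels were taken from the Drugs.com interaction checker.

```python
from itertools import combinations

# Hypothetical lookup table (not the study's actual data):
# (drug pair) -> (severity, carries hemorrhagic-stroke risk?)
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): ("major", True),         # bleeding risk
    frozenset({"ibuprofen", "lisinopril"}): ("moderate", True),  # raises blood pressure
    frozenset({"metformin", "cimetidine"}): ("minor", False),
}

def classify_patient(medications):
    """Return 'risk', 'non_risk', or 'none' for a patient's drug list."""
    found = [INTERACTIONS[frozenset(pair)]
             for pair in combinations(medications, 2)
             if frozenset(pair) in INTERACTIONS]
    if not found:
        return "none"
    # One risk-bearing interaction is enough to place the patient in the risk group.
    return "risk" if any(is_risk for _, is_risk in found) else "non_risk"

print(classify_patient(["warfarin", "aspirin", "atorvastatin"]))  # risk
print(classify_patient(["metformin", "cimetidine"]))              # non_risk
print(classify_patient(["atorvastatin"]))                         # none
```

Checking every pair of a patient's medications against a severity-annotated table mirrors how interaction checkers report results per drug pair, so one patient can contribute several interactions, as in the study's 111 interactions among 55 patients.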
Statistical Analysis
Statistical analyses were performed using GraphPad Prism version 8.0. Descriptive statistics were expressed as mean ± standard deviation or median (minimum–maximum) for continuous variables, and as number and percentage (%) for categorical variables. To assess the relationship between the presence of potential drug-drug interactions and patient characteristics (age, sex, comorbidities, prognosis, length of hospital stay, hemorrhage localization), the Chi-square test or Fisher’s Exact test (where appropriate) was used for categorical variables. For comparisons between continuous and categorical variables, the Independent Samples t-test or Mann-Whitney U test was applied. For comparisons among more than two groups, ANOVA or the Kruskal-Wallis test was utilized. A p-value of <0.05 was considered statistically significant.
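The comparisons described above can be illustrated with SciPy. The contingency table uses the hypertension-by-group counts reported in the Results (hypertensive 39/16/27 out of group sizes 143/23/32); the age samples are synthetic stand-ins, since the study's analyses were run in GraphPad Prism and the raw data are not available here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Chi-square test of hypertension frequency across the three interaction groups
table = np.array([[39, 16, 27],    # hypertensive (from the Results)
                  [104, 7, 5]])    # non-hypertensive (group size minus hypertensive)
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Synthetic age samples illustrating the two- and three-group comparisons
age_none = rng.normal(58, 15, 143)      # no interaction
age_nonrisk = rng.normal(65, 15, 23)    # interaction without hemorrhagic risk
age_risk = rng.normal(70, 15, 32)       # hemorrhagic-risk interaction

u_stat, p_u = stats.mannwhitneyu(age_none, age_risk)            # two groups
h_stat, p_kw = stats.kruskal(age_none, age_nonrisk, age_risk)   # three groups

print(f"chi2={chi2:.1f} (dof={dof}), p={p_chi:.2g}")
```

On the published counts, the chi-square statistic is large and the p-value is far below 0.05, consistent with the significant association the study reports.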
Ethical Approval
This study was approved by the Ethics Committee of Non-Interventional Research at Kırklareli University, Faculty of Medicine (Date: 2025-07-17, No: P20250032R01).
Results
When examining the demographic data of the 198 patients included in the study, 68% (135) of the patients with hemorrhagic stroke were male, while 32% (63) were female. The mean age of male patients was 59.38 ± 17.9 years, that of female patients was 67.5 ± 17.9 years, and the overall mean age was 61.93 ± 18.2 years. When the presence of chronic diseases was evaluated, 134 patients (67.6%) had at least one chronic condition, while 64 patients (32.4%) had no chronic disease. The most frequently observed chronic diseases were hypertension (87 patients), diabetes mellitus (33 patients), and coronary artery disease (23 patients).

Based on bleeding localization, 47 patients had subarachnoid hemorrhage (SAH), 45 had subdural hematoma, 10 had epidural hematoma, 63 had intracerebral hemorrhage, and 4 had intraventricular hemorrhage; hemorrhage in other locations was detected in 29 patients. The mean length of hospital stay was 15.6 days. In terms of prognosis, 136 patients (68.7%) were discharged, while 62 patients (31.3%) died (Table 1).

When the medication use of the patients within the last six months was examined, 87 patients (43.9%) had not used any medication, while 111 patients (56.1%) had used at least one medication. In the drug-drug interaction analysis, interactions were identified in 55 (49.5%) of the 111 patients who had used medication, corresponding to 27.7% of all patients (Figure 1). A total of 111 drug-drug interactions were identified in these 55 patients; classified by severity, 24 were minor, 82 were moderate, and 5 were major (Table 2).
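The counts reported above are internally consistent, which a few lines of arithmetic confirm (all figures are taken directly from the Results text):

```python
# Cross-check of counts and percentages reported in the Results.
n_total = 198
n_meds = 111   # patients who used at least one medication
n_ddi = 55     # medication users with detected interactions

assert round(100 * n_meds / n_total, 1) == 56.1   # 56.1% used medication
assert round(100 * n_ddi / n_meds, 1) == 49.5     # 49.5% of users had DDIs
assert 24 + 82 + 5 == 111                         # minor + moderate + major interactions
assert 47 + 45 + 10 + 63 + 4 + 29 == n_total      # bleeding localizations sum to 198
print("all counts consistent")
```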
When patients were categorized according to the presence of drug interactions, 143 patients had no drug-drug interactions, 23 patients had interactions that did not pose a risk for hemorrhagic stroke, and 32 patients had interactions associated with an increased risk of hemorrhagic stroke (Table 3). A significant difference was observed in age distribution among the three groups classified based on drug interaction status (Kruskal-Wallis test, p < 0.001). According to the results of Dunn's multiple comparisons test, the mean age of patients without any drug interactions was significantly lower than that of the group with interactions associated with a risk of hemorrhagic stroke (p = 0.0009). No significant differences were identified between the other groups (Figure 2). When the prognoses were compared, although the mortality rate was relatively higher in the group with drug interactions associated with hemorrhagic stroke risk, no statistically significant difference was observed between the groups in terms of prognosis (p = 0.5052). Likewise, no statistically significant difference was found in the mean length of hospital stay among the three groups, although the longest stay was again observed in the group with drug interactions associated with hemorrhagic stroke risk (p = 0.4190). The average number of chronic diseases per patient was 0.81 in the non-interaction group, 2.0 in the group with interactions associated with hemorrhagic stroke risk, and 1.69 in the group with non-risk interactions. The presence of two or more chronic diseases differed significantly across drug interaction groups (p < 0.0001): this rate was lower in the non-interaction group and higher in the groups with interactions.
When the distribution of hypertension, the most frequently observed comorbidity, was examined across the drug interaction groups, 39 patients (male = 25, female = 14) in the non-interaction group, 16 patients (male = 10, female = 6) in the non-risk interaction group, and 27 patients (male = 17, female = 10) in the hemorrhagic stroke risk interaction group were found to have hypertension. The frequency of hypertension differed significantly across interaction groups (Chi-square test, p < 0.0001). In particular, the prevalence of hypertension was markedly higher in the drug interaction group associated with hemorrhagic stroke risk. Although hypertension was more frequently observed in male patients within the hemorrhagic stroke risk interaction group, this difference was not statistically significant (p = 0.9213). When the interactions in the hemorrhagic stroke risk group were examined, a total of 30 interactions potentially causing high blood pressure were identified in 21 patients; 12 bleeding-related interactions in 10 patients; and 2 interactions potentially leading to irregular heart rhythm in 2 patients.
Discussion
Among the 198 patients included in the study, an analysis of demographic data revealed that 68% (n = 135) were male and 32% (n = 63) were female. A 2020 study investigating potential drug-drug interactions (DDIs) in 177 patients diagnosed with either hemorrhagic or ischemic stroke reported a similar gender distribution, with 64% male and 36% female patients [18]. Likewise, another study published in the same year involving 3,233 patients showed a comparable distribution [19]. The higher prevalence of hemorrhagic stroke in males may be related to the greater incidence of hypertension in men [20,21]; similarly, in our study, hypertension was more frequently observed among male patients in the group with DDIs associated with hemorrhagic stroke risk.
According to our findings, the mean age of the patients was 61.93 (±18.2) years. In a study by Neves et al. involving over 500,000 stroke patients, the average age was similarly reported as 63.5 (±16.6) years [22]. The mean age of male patients was 59.38 (±17.9), while that of female patients was 67.5 (±17.9). The lower number of female patients and their higher average age may be attributed to the neuroprotective effects of estrogen, which supports cerebral capillary health and reduces stroke risk in women [23].
Existing literature indicates that comorbidities are common in stroke patients, particularly those with hemorrhagic stroke. Consistent with our findings, the most frequent comorbidities reported are hypertension, diabetes mellitus, and dyslipidemia [22].
Although hemorrhagic stroke is less common compared to other types of stroke, it is known to have higher mortality rates. Mortality can vary between 25% and 60%, depending on various factors such as age, sex, presence of comorbidities, and the localization of the hemorrhage [24]. In our study, 31.3% of the patients were found to have died.
Studies evaluating potential DDIs in patients with hemorrhagic stroke are limited in number. In our sample, 56.1% of the patients had used at least one medication within the last six months, and nearly half of them had potential drug-drug interactions. This highlights the importance of thorough medication history-taking.
When patients were grouped based on the presence of potential interactions, it was notable that the mean age in the group with DDIs associated with hemorrhagic stroke risk was significantly higher than in the non-interaction group. This finding may be explained by the greater prevalence of polypharmacy in elderly individuals. It also suggests that advanced age may increase susceptibility to drug interactions that elevate bleeding risk due to pharmacokinetic and pharmacodynamic changes [25].
Although not statistically significant, the higher mortality rate and longer hospital stays in the group with hemorrhagic-risk DDIs are important clinical findings. These results suggest that certain drug interactions may complicate the clinical picture, worsen prognosis, and could potentially reach statistical significance with a larger sample size.
The significantly higher prevalence of chronic diseases in the interaction groups underscores that polypharmacy and drug-drug interactions are markedly more frequent among individuals with multiple comorbidities.
Consistent with the literature, hypertension was the most frequently observed comorbidity among patients with hemorrhagic stroke, particularly in those with drug interactions associated with hemorrhagic risk. It is known that certain drug combinations can elevate blood pressure or reduce the effectiveness of antihypertensive therapies. Notably, our findings showed that a significant portion of drug interactions associated with hemorrhagic risk had the potential to induce hypertension. Given that hypertension is a major risk factor for hemorrhagic stroke, this finding holds considerable clinical importance.
Limitation
Our study has several limitations due to its retrospective design. Information regarding patients’ medication use includes only drugs prescribed within the hospital system where the study was conducted. Medications obtained from other hospitals, healthcare institutions, or over-the-counter drugs without a prescription were not captured in our data. Despite these limitations, since the data were obtained from the largest and sole central hospital in the city, the results may be representative and provide valuable insights for future studies.
Conclusion
In conclusion, although hemorrhagic stroke remains a significant cause of morbidity and mortality worldwide, drug-drug interactions and their associated clinical consequences represent a modifiable risk factor in the management of these patients. Therefore, a detailed medication history should be carefully reviewed in patients with hemorrhagic stroke, and potential drug-drug interactions must be thoroughly assessed. Particular attention should be paid to elderly and multimorbid patients, and polypharmacy should be avoided whenever possible—especially in individuals receiving antihypertensive treatment, where the risk of clinically significant interactions may be higher.
Scientific Responsibility Statement
The authors declare that they are responsible for the article's scientific content, including study design, data collection, analysis and interpretation, writing, preparation and scientific review of the contents, and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Funding: None
Conflict of Interest
The authors declare that there is no conflict of interest.
References
1. Harbison J, Hossain O, Jenkinson D. Diagnostic accuracy of stroke referrals from primary care, emergency room physicians, and ambulance staff using the face, arm speech test. Stroke. 2003;34(1):71-6.
2. Sekerdag E, Solaroglu I, Gursoy-Ozdemir Y. Cell death mechanisms in stroke and novel molecular and cellular treatment options. Curr Neuropharmacol. 2018;16(9):1396-415.
3. González-Pérez A, Gaist D, Wallander MA. Mortality after hemorrhagic stroke: data from general practice (The Health Improvement Network). Neurology. 2013;81(6):559-65.
4. Konduru SST, Ranjan A, Bollisetty A. Assessment of risk factors influencing functional outcomes in cerebral stroke patients using the modified Rankin Scale. World J Pharm Pharm Sci. 2018;7(3):755-69.
5. Benjamin EJ, Virani SS, Callaway CW, et al. Heart disease and stroke statistics-2018 Update: A Report From the American Heart Association. Circulation. 2018;137(12):e493.
6. GBD 2019 Stroke Collaborators. Global, regional, and national burden of stroke and its risk factors, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet Neurol. 2021;20(10):795-820.
7. Oza R, Rundell K, Garcellano M. Recurrent ischemic stroke: Strategies for Prevention. Am Fam Physician. 2017;96(7):436-40.
8. Allen CL, Bayraktutan U. Risk factors for ischemic stroke. Int J Stroke. 2008;3(2):105-16.
9. Basile JN, Bloch MJ. Identifying and managing factors that interfere with or worsen blood pressure control. Postgrad Med. 2010;122(2):35-48.
10. Elliott WJ. Drug interactions and drugs that affect blood pressure. J Clin Hypertens (Greenwich). 2006;8(10):731-7.
11. Tannenbaum C, Sheehan NL. Understanding and preventing drug-drug and drug-gene interactions. Expert Rev Clin Pharmacol. 2014;7(4):533-44.
12. Turan G. The mechanism of coagulation and anticoagulant drugs. Bosphorus Med J. 2016;3(2):71-5.
13. Ceulemans A, Spronk HMH, Ten Cate H. Current and potentially novel antithrombotic treatment in acute ischemic stroke. Thromb Res. 2024;236(4):74-84.
14. Topçuoğlu MA. Serebrovasküler hastalıklarda hipertansiyon tedavisi [Hypertension treatment in cerebrovascular diseases]. Turk. Klin. Nefroloji Özel (Online). 2009;2(3):56-63.
15. Meacham KS, Schmidt JD, Sun Y, et al. Impact of intravenous antihypertensive therapy on cerebral blood flow and neurocognition: A systematic review and meta-analysis. Br J Anaesth. 2025;134(3):713-26.
16. Ngcobo NN. Influence of ageing on the pharmacodynamics and pharmacokinetics of chronically administered medicines in geriatric patients: A review. Clin Pharmacokinet. 2025;64(3):463.
17. Sposato LA, Cameron AC, Johansen MC, et al. Ischemic stroke prevention in patients with atrial fibrillation and a recent ischemic stroke, TIA, or intracranial hemorrhage: A World Stroke Organization (WSO) scientific statement. Int J Stroke. 2025;20(4):385-400.
18. Firdous S, Awad SAS, Fatima A. To estimate the incidence of potential drug-drug interaction in stroke patients admitted in a tertiary care hospital, Telangana. Int J Curr Pharm Res. 2020;12(2):48-52.
19. Sandset EC, Wang X, Carcel C, et al. Sex differences in treatment, radiological features, and outcome after intracerebral haemorrhage: Pooled analysis of Intensive Blood Pressure Reduction in Acute Cerebral Haemorrhage trials 1 and 2. Eur Stroke J. 2020;5(4):345-50.
20. Bourdon F, Ponte B, Dufey Teso A. The impact of sex on blood pressure. Curr Opin Nephrol Hypertens. 2025;34(4):322-29.
21. NCD Risk Factor Collaboration (NCD-RisC). Worldwide trends in hypertension prevalence and progress in treatment and control from 1990 to 2019: A pooled analysis of 1201 population-representative studies with 104 million participants. Lancet. 2022;399(10324):520.
22. Neves G, Cole T, Lee J. Demographic and institutional predictors of stroke hospitalization mortality among adults in the United States. eNeurologicalSci. 2022;22(26):100392.
23. Amala J, Akshay AT, Shanty MJ, et al. Assessment of potential drug-drug interaction in stroke patients. J Clin Diagn Res. 2022;16(7):15-20.
24. Eren F, Yücel K. Comparison of blood lipid parameters in patients with ischemic and hemorrhagic stroke and its relationship with mortality. Sakarya Tıp Dergisi (Online). 2021;11(2):400-08.
25. Caratozzolo S, Gipponi S, Marengoni A, et al. Potentially serious drug-drug interactions in older patients hospitalized for acute ischemic and hemorrhagic stroke. Eur Neurol. 2016;76(3-4):161-6.
Tugce Uskur, Oya Guven. Clinical effects of potential drug-drug interactions in hemorrhagic stroke patients admitted to the emergency department. Ann Clin Anal Med 2025;16(9):678-682