Expert Panel Reaffirms Beta-blockers as First-Line Therapy for Hypertension in India

India: An expert panel convened to develop a consensus on the role of beta-blockers in managing hypertension. The consensus statement, published in the Journal of the Association of Physicians of India (JAPI), offers graded recommendations on the clinical role of beta-blockers in managing hypertension (HTN), HTN with additional cardiovascular (CV) risk, and type 2 diabetes mellitus (T2DM).

Hypertension, a condition affecting millions worldwide, remains a significant health challenge, especially in India, where early-onset cardiovascular disease (CVD) is a growing concern. Despite advances in treatment, the management of essential hypertension continues to be difficult, with control achieved in fewer than 1 in 10 cases when judged against updated guidelines from the American College of Cardiology (ACC) and the International Society of Hypertension (ISH). In response to these challenges, the positioning of beta-blockers, particularly nebivolol, has evolved, with a focus on their relevance to the Indian population’s distinctive characteristics, including premature CVD, fragile coronary architecture, and high resting heart rates.

The panel, comprising clinical and interventional cardiologists, synthesized current evidence to provide graded recommendations based on guidelines, including the 2023 updates of the European Society of Hypertension (ESH). The Indian consensus aims to clarify the clinical role of beta-blockers in treating hypertension, particularly when compounded by comorbid conditions such as T2DM and cardiovascular risk factors.

The consensus findings emphasize the pivotal role of beta-blockers, including nebivolol, in hypertension management. An overwhelming 94% of the panelists agreed that the 2023 ESH hypertension guidelines have increased confidence in the use of beta-blockers as a first-line therapy for HTN. In particular, beta-blockers are recommended for individuals with hypertension who exhibit high resting heart rates, such as younger hypertensive patients under the age of 40. Moreover, for those under 60 years old, regardless of comorbid conditions, beta-blockers are considered the preferred drug choice.

For hypertensive patients with T2DM, 95% of the experts favored nebivolol due to its beneficial effects in managing both hypertension and diabetes. Nebivolol was also deemed the most suitable beta-blocker for patients with angina, with metoprolol and bisoprolol following closely behind in preference. In patients with chronic obstructive pulmonary disease (COPD) and hypertension, nebivolol was once again the most preferred option, as it tends to cause fewer pulmonary side effects compared to other beta-blockers.

The consensus also highlighted the importance of combining nebivolol with angiotensin receptor blockers (ARBs) in patients who do not respond well to ARB monotherapy. This combination therapy was considered particularly effective for hypertensive patients with diabetes and those with ischemic heart disease (IHD), such as those presenting with angina or myocardial infarction (MI).

Consensus on the Role of Beta-blockers in Hypertension Treatment

• First-line Treatment: Beta-blockers are classified as first-line drugs for the treatment of hypertension (HTN).
  • 94% of respondents stated that the updated ESH HTN guidelines (2023) have increased confidence in using beta-blockers for HTN management.
• Beta-blockers in Comorbid Conditions: Beta-blockers are recommended in the treatment of HTN, particularly for patients with comorbid conditions such as myocardial infarction (MI), heart failure with reduced ejection fraction (HFrEF), and atrial fibrillation (AF).
  • They are also beneficial in patients with high resting heart rates, providing additional protection.
• High Resting Heart Rates: Beta-blockers should be prescribed to patients with HTN who have high resting heart rates, including younger hypertensives under the age of 40.
• Resting Heart Rate and Blood Pressure: In the general population, resting heart rate is a significant predictor of blood pressure and mortality risk.
  • Beta-blockers are recommended for patients under 60 years of age with HTN, regardless of the presence of comorbid diseases.
• Nebivolol in Hypertensive Patients with Diabetes: Nebivolol is the most preferred beta-blocker for hypertensive patients with diabetes.
  • Other commonly preferred options include bisoprolol and metoprolol.
• Beta-blockers in COPD and HTN: Nebivolol is the preferred beta-blocker for patients with both chronic obstructive pulmonary disease (COPD) and HTN due to its favorable side-effect profile.
• Nebivolol + ARB Combination: The combination of nebivolol and angiotensin receptor blockers (ARBs) can be considered in:
  • Patients who do not respond well to ARB monotherapy
  • Hypertensive patients with diabetes
  • Patients with HTN and ischemic heart disease (IHD), particularly those experiencing angina or myocardial infarction (MI).

    “The consensus firmly places beta-blockers, especially nebivolol, at the forefront of hypertension treatment in India. The findings support their use in various patient subsets, including those with high resting heart rates, younger hypertensives, and patients with comorbid conditions like T2DM and COPD. Given the unique health challenges in India, these updated recommendations provide a comprehensive approach to managing hypertension, ensuring better long-term outcomes for patients,” the researchers concluded.

    Reference:

    Mohan JC, Roy DG, Ray S, et al. Position of Beta-blockers in the Treatment of Hypertension Today: An Indian Consensus. J Assoc Physicians India 2024;72(10):83-90.


    C-Reactive Protein-Triglyceride Glucose Index Key Predictor of Stroke in Hypertensive Patients, finds study

    China: A national study in hypertensive patients revealed an association between elevated C-reactive protein-triglyceride glucose index (CTI) and increased stroke risk. Over 7 years, each one-unit rise in CTI was linked to a 21% higher stroke risk. Patients in the highest CTI quartile had a 66% greater stroke risk compared to those in the lowest quartile.

    The researchers suggested that CTI could serve as a valuable predictive marker for stroke in hypertensive patients. The findings were published online in Diabetology & Metabolic Syndrome on November 21, 2024.

    Both the triglyceride-glucose (TyG) index, an indicator of insulin resistance (IR), and inflammation are known risk factors for stroke in hypertensive patients. However, few studies have combined the TyG index and inflammation markers to predict stroke risk in this population. The C-reactive protein-triglyceride-glucose index is a new marker that offers a comprehensive assessment of both IR and inflammation severity.
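
As background on how such an index is constructed: the TyG index is conventionally computed as ln(triglycerides × fasting glucose / 2) with both in mg/dL, and the CTI literature typically defines CTI = 0.412 × ln(CRP) + TyG. The minimal Python sketch below assumes those published formulas; the exact coefficient and units used in this particular paper are not restated here and should be checked against the original.

```python
import math

def tyg_index(tg_mg_dl: float, fpg_mg_dl: float) -> float:
    """Triglyceride-glucose (TyG) index: ln(TG x FPG / 2), both in mg/dL."""
    return math.log(tg_mg_dl * fpg_mg_dl / 2)

def cti(crp_mg_l: float, tg_mg_dl: float, fpg_mg_dl: float) -> float:
    """C-reactive protein-triglyceride glucose index.

    Assumes the commonly published form 0.412 * ln(CRP) + TyG; the exact
    coefficient used in this particular study is an assumption here.
    """
    return 0.412 * math.log(crp_mg_l) + tyg_index(tg_mg_dl, fpg_mg_dl)

# Illustrative values: CRP 2.0 mg/L, TG 150 mg/dL, FPG 100 mg/dL
print(round(cti(2.0, 150.0, 100.0), 3))
```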

Against this background, Xin Chen of The Third People’s Hospital of Chengdu, Chengdu, Sichuan, China, and colleagues investigated the relationship between CTI and stroke risk in patients with hypertension.

    For this purpose, the researchers recruited 3,834 hypertensive patients without a history of stroke at baseline from the China Health and Retirement Longitudinal Study (CHARLS). To assess the relationship between CTI and stroke risk, they used multivariate Cox regression and restricted cubic spline (RCS) analyses. Additionally, the Boruta algorithm was employed to evaluate the significance of CTI and develop prediction models to forecast stroke incidence in the study cohort.
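
To make the modeling step concrete, here is a minimal sketch of such a multivariable Cox fit using the Python lifelines library. The file name, column names, and covariate list are hypothetical stand-ins, not the study’s actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: one row per participant, with follow-up
# time in years, a 0/1 stroke indicator, the CTI value, and covariates.
df = pd.read_csv("charls_htn_cohort.csv")  # hypothetical file name

cph = CoxPHFitter()
cph.fit(
    df[["time_years", "stroke", "cti", "age", "sex", "bmi", "smoking"]],
    duration_col="time_years",
    event_col="stroke",
)
# exp(coef) for 'cti' is the hazard ratio per one-unit rise in CTI;
# the paper reports HR = 1.21 for this contrast. Quartile contrasts
# (Q4 vs Q1) work the same way with indicator variables for quartiles.
cph.print_summary()
```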

    The study revealed the following findings:

    • After 7 years of follow-up, 9.6% of hypertensive patients (368 cases) experienced a stroke.
    • Multivariate Cox regression analysis showed that each one-unit increase in CTI was associated with a 21% higher stroke risk (HR = 1.21).
    • Patients in the top CTI quartile were 66% more likely to have a stroke compared to those in the bottom quartile (HR = 1.66).
    • RCS analysis confirmed a linear relationship between CTI and stroke risk.
    • The Boruta algorithm validated CTI as an important predictor of stroke risk.
    • The Support Vector Machine (SVM) survival model showed the best predictive performance for stroke risk, with an area under the curve (AUC) of 0.956.

    To summarize, the study utilized the CHARLS database and a novel composite indicator of insulin resistance (IR) and inflammation, called CTI, to assess stroke risk in a hypertensive population.

    “The results demonstrated a significant link between higher CTI levels and an increased stroke risk, indicating that CTI could serve as a valuable predictive biomarker for stroke in hypertensive patients,” the researchers concluded.

    Reference:

    Tang, S., Wang, H., Li, K. et al. C-reactive protein-triglyceride glucose index predicts stroke incidence in a hypertensive population: a national cohort study. Diabetol Metab Syndr 16, 277 (2024). https://doi.org/10.1186/s13098-024-01529-z


Coronary CT Angiography Screening Offers No Mortality Benefit for People with Diabetes: Study

Researchers have found in a new study that coronary CT angiography (CCTA) screening did not reduce mortality among individuals with diabetes.

    Screening asymptomatic adults with diabetes for coronary artery disease using CCTA did not significantly reduce the risk of mortality or nonfatal myocardial infarction, according to extended follow-up data from the FACTOR-64 study.

    While coronary heart disease and diabetes are often seen in the same patients, a diagnosis of diabetes does not necessarily mean that patients also have coronary heart disease, according to a new study from researchers at Intermountain Health in Salt Lake City.

The Intermountain study found that proactively screening patients with type 1 or type 2 diabetes who have not shown symptoms of heart problems for coronary heart disease does not improve long-term mortality rates, nor does it lower their chance of having a heart attack or stroke in the future.

    “Our study found that doing these kinds of screenings on patients doesn’t make any marked difference in their long-term survival rates,” said J. Brent Muhlestein, principal investigator of the study and co-director of cardiovascular research at Intermountain Health.

    “Instead, results of our study reinforce that we should be focusing on other proven interventions for people with diabetes, like medication management, diet and exercise, to help these patients lead long and healthy lives,” he added.

    Findings from the Intermountain Health study were presented at the American Heart Association’s Scientific Sessions 2024 in Chicago on Monday, November 18.

For the study, Intermountain Health researchers examined data from the FACTOR-64 study, a randomized clinical trial of 900 people who had had type 1 or type 2 diabetes for at least three to five years and had no symptoms of coronary artery disease.

Of these patients, 452 were screened with coronary computed tomography angiography (CCTA), which uses a powerful X-ray to make a 3D image of the heart. Researchers compared these patients with a control group of 448 people who were treated according to standard optimal diabetes care guidelines.

Enrollment occurred between July 2007 and May 2013, with follow-up through May 2024.

    Researchers found that using CCTA to screen for cardiovascular disease did not significantly affect the rates of all-cause mortality or of non-fatal heart incidents, like heart attack and stroke. This was true when researchers initially followed up four or five years after the CCTA screening, and again at 12 years.

    “These findings should discourage the use of CCTA for screenings in diabetes patients who do not show any symptoms of heart disease,” said Dr. Muhlestein.

    “While our study showed that screening for coronary heart disease with CCTA won’t make any difference, it did show that if patients carefully medically manage their diabetes, they may live almost as long as somebody who does not have diabetes, which wasn’t the case before,” said Dr. Muhlestein. “This study shows that, even over an extended period of time, a screening like this can’t replace those kinds of critical behaviors.”


    Study to Revolutionize Drug Delivery: Harnessing Machine Learning Optimizes Remifentanil Pharmacokinetics

Remifentanil is a useful choice for medical interventions that require rapid pain relief due to its quick onset of action upon administration. One key feature of remifentanil is its brief half-life, allowing precise management and swift recovery after the medication is stopped. A recent research paper investigates the use of supervised machine learning methods to analyze the pharmacokinetic characteristics of the opioid drug remifentanil. The goal is to improve the prediction of the drug’s analgesic effects, which is crucial for target-controlled drug delivery systems.

    The study utilizes a dataset from the Kaggle database that includes information on administering intravenous infusions of remifentanil to 65 individuals, with measurements of drug concentration over time. Features used in the analysis include age, gender, infusion rate, body surface area, and lean body mass.

    Regression Algorithm Comparison

    The researchers compare the performance of five different regression algorithms – fine tree, bagged tree, fine Gaussian support vector machine (SVM), wide neural network, and exponential Gaussian process regression (GPR) – in predicting the remifentanil concentration at a given time. The results show that the prediction algorithms outperform traditional pharmacokinetic and pharmacodynamic models in terms of accuracy and mean squared error.
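
A rough scikit-learn analogue of this five-way comparison is sketched below. The file name, feature columns, split, and hyperparameters are hypothetical; MATLAB-style model names such as “fine Gaussian SVM” and “exponential GPR” are approximated here with an RBF-kernel SVR and a Matern(ν=0.5) (i.e., exponential) kernel, respectively.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical feature table: demographics, infusion rate, body surface
# area, lean body mass, elapsed time; target = measured concentration.
df = pd.read_csv("remifentanil_pk.csv")  # hypothetical file name
X = df[["age", "gender", "rate", "bsa", "lbm", "time"]]
y = df["concentration"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "fine tree": DecisionTreeRegressor(random_state=0),
    "bagged tree": BaggingRegressor(n_estimators=30, random_state=0),
    "fine Gaussian SVM": SVR(kernel="rbf", gamma="scale"),
    "wide neural network": MLPRegressor(hidden_layer_sizes=(100,),
                                        max_iter=2000, random_state=0),
    "exponential GPR": GaussianProcessRegressor(kernel=Matern(nu=0.5),
                                                random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: MSE={mean_squared_error(y_te, pred):.4f}, "
          f"R2={r2_score(y_te, pred):.3f}")
```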

    Model Performance Analysis

    Specifically, the GPR model yielded the lowest root mean squared error and mean absolute error, as well as the best R-squared value. The researchers further optimized the GPR model, reducing the mean squared error to 5.4003. They note that incorporating additional patient factors like hepatic and renal function, comorbidities, and cardiac output could further enhance the accuracy of the pharmacokinetic predictions.

    Conclusion

    The paper concludes that applying machine learning in drug delivery can significantly reduce resource costs and the time and effort required for laboratory experiments in the pharmaceutical industry. The models developed can enable personalized dosing regimens, help minimize adverse effects like respiratory depression, and improve the titration of remifentanil infusions. Overall, this research demonstrates the potential of supervised learning techniques to advance pharmacokinetic modeling and optimize opioid therapy.

    Key Points

    1. The study investigates the use of supervised machine learning methods to analyze the pharmacokinetic characteristics of the opioid drug remifentanil, with the goal of improving the prediction of the drug’s analgesic effects for target-controlled drug delivery systems.

    2. The study utilizes a dataset from the Kaggle database that includes information on administering intravenous infusions of remifentanil to 65 individuals, with measurements of drug concentration over time. The features used in the analysis include age, gender, infusion rate, body surface area, and lean body mass.

    3. The researchers compare the performance of five different regression algorithms – fine tree, bagged tree, fine Gaussian support vector machine (SVM), wide neural network, and exponential Gaussian process regression (GPR) – in predicting the remifentanil concentration at a given time. The results show that the prediction algorithms outperform traditional pharmacokinetic and pharmacodynamic models in terms of accuracy and mean squared error.

    4. The GPR model yielded the lowest root mean squared error and mean absolute error, as well as the best R-squared value. The researchers further optimized the GPR model, reducing the mean squared error to 5.4003.

    5. The researchers note that incorporating additional patient factors like hepatic and renal function, comorbidities, and cardiac output could further enhance the accuracy of the pharmacokinetic predictions.

    6. The paper concludes that applying machine learning in drug delivery can significantly reduce resource costs and the time and effort required for laboratory experiments in the pharmaceutical industry, and the models developed can enable personalized dosing regimens, help minimize adverse effects like respiratory depression, and improve the titration of remifentanil infusions.

Reference:

Prathvi Shenoy et al. (2024). Data-Based Regression Models for Predicting Remifentanil Pharmacokinetics. Indian Journal of Anaesthesia. https://doi.org/10.4103/ija.ija_549_24


    WHO Grants Prequalification to Rapid Molecular TB Test

The World Health Organization (WHO) has announced the prequalification of the Xpert MTB/RIF Ultra, a rapid molecular diagnostic test for tuberculosis (TB). This nucleic acid amplification test detects Mycobacterium tuberculosis in sputum samples and provides results within hours. It can also identify mutations linked to rifampicin resistance, a key marker of multidrug-resistant TB.

    Tuberculosis is one of the world’s leading infectious disease killers, causing over a million deaths annually and imposing immense socioeconomic burdens, especially in low- and middle-income countries. Accurate and early detection of TB, especially drug-resistant strains, remains a critical and challenging global health priority.

    “This first prequalification of a diagnostic test for tuberculosis marks a critical milestone in WHO’s efforts to support countries in scaling up and accelerating access to high-quality TB assays that meet both WHO recommendations and its stringent quality, safety and performance standards,” said Dr Yukiko Nakatani, WHO Assistant Director-General for Access to Medicines and Health Products. “It underscores the importance of such groundbreaking diagnostic tools in addressing one of the world’s deadliest infectious diseases.”

WHO prequalification of this test is expected to assure the quality of diagnostic tests and thereby improve access to early diagnosis and treatment. It complements WHO’s endorsement approach, which is grounded in emerging evidence, diagnostic accuracy, and patient outcomes alongside considerations of accessibility and equity, with prequalification requirements on quality, safety, and performance.

    WHO’s assessment for prequalification is based on information submitted by the manufacturer, Cepheid Inc., and the review by Singapore’s Health Sciences Authority (HSA), the regulatory agency of record for this product.

Designed for use on the GeneXpert® Instrument System, the Xpert® MTB/RIF Ultra nucleic acid amplification test (NAAT) detects the genetic material of Mycobacterium tuberculosis, the bacterium that causes TB, in sputum samples and provides accurate results within hours. Simultaneously, the test identifies mutations associated with rifampicin resistance, a key indicator of multidrug-resistant TB.

    It is intended for patients who screen positive for pulmonary TB and who have either not started anti-tuberculosis treatment or received less than three days of therapy in the past six months.

    “High-quality diagnostic tests are the cornerstone of effective TB care and prevention,” said Dr Rogerio Gaspar, WHO Director for Regulation and Prequalification. “Prequalification paves the way for equitable access to cutting-edge technologies, empowering countries to address the dual burden of TB and drug-resistant TB.”

    In a joint effort by WHO Global TB Programme and the Department of Regulation and Prequalification to improve access to quality-assured TB tests and expand diagnostic options for countries, WHO is currently assessing seven additional TB tests. 


    Brain scan predicts effectiveness of spinal cord surgery, reveals research

    A 10-minute brain scan can predict the effectiveness of a risky spinal surgery to alleviate intractable pain. The Kobe University result gives doctors a much-needed biomarker to discuss with patients considering spinal cord stimulation.

    For patients with chronic pain that cannot be cured in any other way, a surgical procedure called “spinal cord stimulation” is seen as a method of last resort. The treatment works by implanting leads into the spine of patients and electrically stimulating the spinal cord. Because the spinal cord transmits sensations to the brain from all over the body, the position of the leads is adjusted so that the patients feel the stimulation at the site of the pain. The Kobe University anesthesiologist UENO Kyohei says: “A big issue is that the procedure is effective for some but not for other patients, and which is the case is usually evaluated in a short trial of a few days to two weeks prior to permanent implantation. Although this trial is short, it is still an invasive and risky procedure. Therefore, clinicians have long been interested in the possibility of predicting a patient’s responsiveness to the procedure through non-invasive means.”

    Functional magnetic resonance imaging, or fMRI, has become a standard tool to visualize how the brain processes information. More precisely, it can show which parts of the brain are active in response to a stimulus, and which regions are thus functionally connected with each other. “In an earlier study, we reported that for the analgesic ketamine, pain relief correlates negatively with how strongly connected two regions of the default mode network are before the drug’s administration,” explains Ueno. The default mode network, which plays an important role in self-related thought, has previously been implicated in chronic pain. Another relevant factor is how the default mode network connects with the salience network, which is involved in regulating attention and the response to stimuli. Ueno says, “Therefore, we wanted to examine whether the correlation of the activities within and between these networks could be used to predict responsiveness to spinal cord stimulation.”
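
At its core, the “functional connectivity” measured here is the correlation between two regions’ fMRI time courses. The numpy sketch below illustrates that computation on synthetic data; the region names and signals are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative resting-state time courses (one value per fMRI volume)
# for a default-mode-network ROI and a salience-network ROI.
n_volumes = 300
dmn_roi = rng.standard_normal(n_volumes)
salience_roi = 0.4 * dmn_roi + rng.standard_normal(n_volumes)

# Functional connectivity = Pearson correlation of the two time courses.
fc = np.corrcoef(dmn_roi, salience_roi)[0, 1]
print(f"DMN-salience connectivity: {fc:.2f}")
# In the study, weaker DMN-salience connectivity was associated with
# better response to spinal cord stimulation.
```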

    He and his team published their results in the British Journal of Anaesthesia. They found that the better patients responded to spinal cord stimulation therapy, the weaker a specific region of the default mode network was connected to one in the salience network. Ueno comments, “Not only does this offer an attractive biomarker for a prognosis for treatment effectiveness, it also strengthens the idea that an aberrant connection between these networks is responsible for the development of intractable chronic pain in the first place.”

    Undergoing an fMRI scan is not the only option. Combining pain questionnaires with various clinical indices has been reported as another similarly reliable predictor for a patient’s responsiveness to spinal cord stimulation. However, the researchers write that “Although the cost of an MRI scan is controversial, the burden on both patients and providers will be reduced if the responsiveness to spinal cord stimulation can be predicted by one 10-minute resting state fMRI scan.”

In total, 29 patients with diverse forms of intractable chronic pain participated in this Kobe University study. On the one hand, this diversity is likely the reason why the overall responsiveness to the treatment was lower than in similar past studies, and it also made it more difficult to accurately assess the relationship between brain function and responsiveness. On the other hand, the researchers also say that, “From a clinical perspective, the ability to predict outcomes for patients with various conditions may provide significant utility.” Ueno adds: “We believe that more accurate evaluation will become possible with more cases and more research in the future. We are also currently conducting research on which brain regions are strongly affected by various patterns of spinal cord stimulation. At this point, we are just at the beginning of this research, but our main goal is to use functional brain imaging as a biomarker for spinal cord stimulation therapy to identify the optimal treatment for each patient in the future.”

    Reference:

    Kyohei Ueno, Yoshitetsu Oshiro, Shigeyuki Kan, Yuki Nomura, Hitoaki Satou, Norihiko Obata, Satoshi Mizobuchi. Resting-state brain functional connectivity in patients with chronic intractable pain who respond to spinal cord stimulation therapy. British Journal of Anaesthesia, 2024; DOI: 10.1016/j.bja.2024.10.011


    Study finds central obesity as effective predictor of colorectal cancer

    A new study published in the International Journal of Obesity showed that the majority of the colorectal cancer (CRC) risk associated with obesity may be attributed to central obesity, which is a far better predictor of CRC.

One known risk factor for colorectal cancer is general obesity, which is frequently measured by body mass index (BMI). It is unclear, though, how much of this link may be explained by central adiposity. This study by Fatemeh Safizadeh and colleagues set out to determine whether, and to what degree, waist circumference (WC), waist-to-hip ratio (WHR), and BMI are associated with CRC risk independently of one another.

Data were examined from over 500,000 male and female participants aged 40 to 69 years who were enrolled in the UK Biobank project between 2006 and 2010. Multivariable Cox proportional hazards models were constructed to compute hazard ratios (HR) and their 95% confidence intervals (CI).

A total of 5,977 of the 460,784 individuals, followed for a median of 12.5 years, developed colorectal cancer. The multivariable adjusted HRs (95% CIs) per standard deviation increase were 1.10 (1.07–1.13) for BMI, 1.18 (1.14–1.22) for WHR, and 1.14 (1.11–1.18) for WC.
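
A brief aside on the “per standard deviation” scaling used in these estimates: a Cox log-hazard coefficient on a variable’s raw scale converts to a per-SD hazard ratio as HR_perSD = exp(β × SD). The numbers below are made up, chosen only to land near the scale of the reported WHR estimate.

```python
import math

# Hypothetical Cox coefficient for WHR on its raw scale, and a
# hypothetical sample standard deviation of WHR (both made up).
beta_per_unit = 2.36   # log-hazard per 1.0 increase in WHR
sd_whr = 0.07

hr_per_sd = math.exp(beta_per_unit * sd_whr)
print(f"HR per SD of WHR: {hr_per_sd:.2f}")  # ~1.18 with these inputs
```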

Following mutual adjustment, the association with CRC was substantially attenuated for BMI (1.04 (1.01–1.07)) but maintained for WHR (1.15 (1.11–1.20)). Moreover, across all BMI categories, WHR showed strong, statistically significant associations with CRC risk, whereas within WHR categories, the associations of BMI with CRC risk were modest and not statistically significant.

Following mutual adjustment, BMI was likewise not associated with risk of colon or rectal cancer in women. In contrast, WHR was significantly associated with the risk of colon and rectal cancer in both sexes, both before and after adjusting for BMI. Because of their strong correlation, BMI and WC could not be mutually adjusted for each other.

Overall, this study separately evaluated the relationships of central and general obesity with the risk of colon, rectal, and colorectal cancer. The findings suggest that central obesity probably accounts for the majority of the obesity-related CRC risk in this primarily European population.

    Reference:

    Safizadeh, F., Mandic, M., Schöttker, B., Hoffmeister, M., & Brenner, H. (2024). Central obesity may account for most of the colorectal cancer risk linked to obesity: evidence from the UK Biobank prospective cohort. In International Journal of Obesity. Springer Science and Business Media LLC. https://doi.org/10.1038/s41366-024-01680-7


Children who attend day care less likely to develop type 1 diabetes: JAMA Pediatrics

Children who attend day care are less likely to develop type 1 diabetes, suggests a study published in JAMA Pediatrics.

    A meta-analysis published in 2001 suggested that exposure to infections measured by day care attendance may be important in the pathogenesis of type 1 diabetes. Several new studies on the topic have since been published.

A study was done to investigate the association between day care attendance and risk of type 1 diabetes, covering all available literature up to March 10, 2024. Studies that reported a measure of association between day care attendance and risk of type 1 diabetes were included. Details, including exposure and outcome assessment and adjustment for confounders, were extracted from the included studies.

The multivariable association with the highest number of covariates, the association with the lowest number of covariates, and unadjusted estimates, with corresponding 95% CIs, were extracted. DerSimonian and Laird random-effects meta-analyses were performed, yielding conservative confidence intervals around relative risks. The principal association measure was day care attendance vs no day care attendance and risk of type 1 diabetes.

Seventeen articles comprising 22 observational studies of 100,575 participants were included in the meta-analysis. Among the participants, 3,693 had type 1 diabetes and 96,882 were controls. An inverse association between day care attendance and risk of type 1 diabetes was found (combined odds ratio, 0.68; 95% CI, 0.58-0.79; P < .001; adjusted for all available confounders). When the 3 cohort studies included were analyzed separately, the risk of type 1 diabetes was 15% lower in the group attending day care; however, the difference was not statistically significant (odds ratio, 0.85; 95% CI, 0.59-1.12; P = .37).
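
For readers curious about the mechanics, the DerSimonian-Laird random-effects pooling named above can be written in a few lines of Python. The per-study odds ratios and standard errors below are made-up inputs for illustration, not the meta-analysis’s actual data.

```python
import numpy as np

# Made-up per-study odds ratios and standard errors on the log scale.
log_or = np.log(np.array([0.62, 0.75, 0.58, 0.90, 0.70]))
se = np.array([0.15, 0.20, 0.25, 0.30, 0.18])

# Fixed-effect (inverse-variance) weights and Cochran's Q statistic.
w = 1 / se**2
mu_fe = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - mu_fe) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2.
k = len(log_or)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects weights, pooled estimate, and 95% CI on the OR scale.
w_re = 1 / (se**2 + tau2)
mu_re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = np.exp([mu_re - 1.96 * se_re, mu_re + 1.96 * se_re])
print(f"pooled OR: {np.exp(mu_re):.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```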

These results demonstrate that day care attendance appears to be associated with a reduced risk of type 1 diabetes. Increased contact with microbes among children attending day care, compared with children who do not, may explain these findings. However, further prospective cohort studies are needed to confirm the proposed association.

    Reference:

    Tall S, Virtanen SM, Knip M. Day Care Attendance and Risk of Type 1 Diabetes: A Meta-Analysis and Systematic Review. JAMA Pediatr. 2024;178(12):1290–1297. doi:10.1001/jamapediatrics.2024.4361


Cigarette Smoking Closely Linked to Risk of Heart Failure and Associated Hospitalization: JAHA

A new study published in the Journal of the American Heart Association showed that current smokers had a higher likelihood of hospitalization for heart failure (HF) and both of its subtypes (preserved and reduced ejection fraction) compared with never smokers.

Globally, cardiovascular diseases (CVDs) constitute a leading cause of morbidity and death. Worldwide, at least 26 million people suffer from heart failure (HF), including an estimated 6.2 million individuals in the United States. By 2030, the US figure is expected to surpass 8 million, in part due to the aging population.

Despite significant progress in recent decades, the prognosis for heart failure remains dire, with a 1-year death rate of around 30% following diagnosis. Cigarette smoking has been linked to incident heart failure. However, it is unclear how smoking and smoking cessation relate to the different forms of heart failure, especially among Black individuals. Daisuke Kamimura and team therefore conducted this study to assess the effects of smoking and smoking cessation on heart failure.

The analysis used Jackson Heart Study data on 4,189 Black participants (mean age 54 years, 64% female) who had no history of heart failure or coronary heart disease at baseline (never smokers n=2,934, past smokers n=761, current smokers n=464). The researchers examined the relationship between cigarette smoking and incident hospitalization for heart failure, as well as its two subtypes (HF with reduced ejection fraction and HF with preserved ejection fraction). Compared with never smoking, current smoking was linked to incident HF of both subtypes after controlling for confounding variables.

Compared with never smokers, those with greater smoking intensity and those with greater cumulative smoking burden had a higher incidence of heart failure with preserved ejection fraction. These associations were not significantly affected by baseline lung function as assessed by spirometry.

A longer time since quitting smoking was associated with a lower risk of heart failure, and it took more than 20 years of cessation for the risk to become equivalent to that of never smokers. Overall, cigarette smoking was linked to the development of both HF subtypes, and this relationship held true regardless of its effects on baseline lung function.

    Reference:

    Kamimura, D., Yimer, W. K., Mentz, R. J., Shah, A. M., White, W. B., Blaha, M. J., Oshunbade, A., Hamid, A., Suzuki, T., Clark, D. R., Fox, E. R., Correa, A., Butler, J., & Hall, M. E. (2024). Cigarette Smoking, Smoking Cessation, and Heart Failure Subtypes: Insights From the Jackson Heart Study. Journal of the American Heart Association, 13(23), e032921. https://doi.org/10.1161/JAHA.123.032921


    Early Surgical Menopause and HRT Linked to Increased Long-Term Risk of Cholecystectomy in Women: Study

China: Surgical menopause at a young age significantly increases the risk of cholecystectomy, while natural menopause shows no such association, a recent study has revealed. This highlights the distinct health implications of surgically induced menopause compared with the natural process.

    The use of hormone replacement therapy (HRT) further elevated the risk, particularly in women who underwent surgical menopause and began HRT before menopause, with a hazard ratio of 2.28. Early surgical menopause and HRT use were independently identified as contributors to the increased risk of cholecystectomy. The findings were published online in Frontiers in Medicine on November 27, 2024.

Women have a higher risk of gallbladder disease than men, suggesting a potential influence of female hormones on its development. Against this background, Guan-Jun Ding of the Second Department of General Surgery, Ningbo No. 9 Hospital, Ningbo, China, and colleagues aimed to evaluate menopausal characteristics, hormone replacement therapy, and their combined impact on women’s long-term risk of cholecystectomy.

    The study included 184,677 women from the UK Biobank. Multivariable Cox regression models were employed to analyze the associations between menopausal characteristics, HRT, and the risk of cholecystectomy. Additionally, the combined effects of HRT use, menopausal status, and menopause type on the incidence of cholecystectomy were assessed.

    The study led to the following findings:

    • Over a median follow-up of 12.7 years, 4,991 cholecystectomy cases were reported.
    • Natural menopause, regardless of the age at onset, was not linked to an increased risk of cholecystectomy.
    • Surgical menopause at a young age was associated with a higher risk of cholecystectomy.
    • Ever using hormone replacement therapy was associated with an elevated risk of cholecystectomy.
    • The highest risk was observed in women who underwent surgical menopause and started HRT before menopause, with a hazard ratio of 2.28, compared to naturally menopausal women who never used HRT.

    The study’s limitations include potential residual confounding and recall bias from self-reported data on menopause and HRT use. The prospective design likely reduces differential misclassification, but the lack of data on HRT formulations limits further analysis. The researchers focused on cholecystectomy without assessing broader gallbladder disease risk. Additionally, the study population, predominantly white, middle-aged, and older UK women, limits the generalizability of results to other groups.

    “Surgical menopause at a young age, rather than natural menopause, was linked to a higher risk of cholecystectomy. Independent of menopausal characteristics, HRT use increased this risk, particularly in surgically menopausal women initiating HRT before menopause,” the researchers concluded.

    Reference:

    Ding, G., Jiang, W., Lyu, J., Ma, J., Chen, G., Li, F., Yang, S., Miao, M., & Hua, Y. (2024). Menopausal characteristics and hormone replacement therapy about long-term risk of cholecystectomy in women. Frontiers in Medicine, 11, 1446271. https://doi.org/10.3389/fmed.2024.1446271
