Investigation of interfractional shape variations based on statistical point distribution model for prostate cancer radiation therapy.
The setup errors and organ motion errors pertaining to the clinical target volume (CTV) have been considered the two major causes of uncertainty in the determination of the CTV-to-planning target volume (PTV) margins for prostate cancer radiation treatment planning. We based our study on the assumption that interfractional target shape variations are not negligible as another source of uncertainty for the determination of precise CTV-to-PTV margins. Thus, we investigated the interfractional shape variations of CTVs based on a point distribution model (PDM) for prostate cancer radiation therapy. To quantify the shape variations of CTVs, the PDM was applied to the contours of 4 types of CTV regions (low-risk, intermediate-risk, and high-risk CTVs, and prostate plus entire seminal vesicles), which were delineated by considering prostate cancer risk groups on planning computed tomography (CT) and cone beam CT (CBCT) images of 73 fractions of 10 patients. The standard deviations (SDs) of the interfractional random errors for shape variations were obtained from covariance matrices based on the PDMs, which were generated from vertices of triangulated CTV surfaces. The correspondences between CTV surface vertices were determined based on a thin-plate spline robust point matching algorithm. The systematic error for shape variations was defined as the average deviation between the surfaces of an average CTV and the planning CTVs, and the random error as the average deviation of CTV surface vertices for fractions from an average CTV surface. The means of the SDs of the systematic errors for the four types of CTVs ranged from 1.0 to 2.0 mm along the anterior direction, 1.2 to 2.6 mm along the posterior direction, 1.0 to 2.5 mm along the superior direction, 0.9 to 1.9 mm along the inferior direction, 0.9 to 2.6 mm along the right direction, and 1.0 to 3.0 mm along the left direction.
Concerning the random errors, the means of the SDs ranged from 0.9 to 1.2 mm along the anterior direction, 1.0 to 1.4 mm along the posterior direction, 0.9 to 1.3 mm along the superior direction, 0.8 to 1.0 mm along the inferior direction, 0.8 to 0.9 mm along the right direction, and 0.8 to 1.0 mm along the left direction. Since the shape variations were not negligible for intermediate- and high-risk CTVs, they should be taken into account in the determination of the CTV-to-PTV margins in radiation treatment planning for prostate cancer.
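The random-error SDs described above come from the diagonal of the PDM covariance matrix over corresponded surface vertices. A minimal sketch of that computation, assuming vertex correspondence has already been established (e.g. by thin-plate spline robust point matching); the array shapes and the function name are illustrative:

```python
import numpy as np

def pdm_random_error_sds(shapes):
    """Per-vertex standard deviations from a point distribution model.

    shapes: array of shape (n_fractions, n_vertices, 3) holding corresponded
    surface vertices. Returns the mean (average CTV) shape and the
    per-vertex, per-axis SDs (n_vertices, 3), taken from the diagonal of
    the vertex-wise covariance.
    """
    shapes = np.asarray(shapes, dtype=float)
    mean_shape = shapes.mean(axis=0)        # average CTV surface
    deviations = shapes - mean_shape        # fraction-wise deviations
    # Diagonal of the covariance matrix -> variance per vertex coordinate
    variances = (deviations ** 2).mean(axis=0)
    return mean_shape, np.sqrt(variances)

# Toy example: 3 fractions of a 2-vertex "surface"
shapes = [[[0, 0, 0], [10, 0, 0]],
          [[2, 0, 0], [10, 2, 0]],
          [[4, 0, 0], [10, 4, 0]]]
mean_shape, sds = pdm_random_error_sds(shapes)
```

Here vertex 0 varies only along x and vertex 1 only along y, so only those SD entries are non-zero.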
Exercise therapy for chronic fatigue syndrome.
Chronic fatigue syndrome (CFS) is characterised by persistent, medically unexplained fatigue, as well as symptoms such as musculoskeletal pain, sleep disturbance, headaches and impaired concentration and short-term memory. CFS presents as a common, debilitating and serious health problem. Treatment may include physical interventions, such as exercise therapy, which was last reviewed in 2004. The objective of this review was to determine the effects of exercise therapy (ET) for patients with CFS as compared with any other intervention or control:
• Exercise therapy versus 'passive control' (e.g. treatment as usual, waiting-list control, relaxation, flexibility).
• Exercise therapy versus other active treatment (e.g. cognitive-behavioural therapy (CBT), cognitive treatment, supportive therapy, pacing, pharmacological therapy such as antidepressants).
• Exercise therapy in combination with other specified treatment strategies versus other specified treatment strategies (e.g. exercise combined with pharmacological treatment vs pharmacological treatment alone).
We searched The Cochrane Collaboration Depression, Anxiety and Neurosis Controlled Trials Register (CCDANCTR), the Cochrane Central Register of Controlled Trials (CENTRAL) and SPORTDiscus up to May 2014 using a comprehensive list of free-text terms for CFS and exercise. We located unpublished or ongoing trials through the World Health Organization (WHO) International Clinical Trials Registry Platform (to May 2014). We screened reference lists of retrieved articles and contacted experts in the field for additional studies. We included randomised controlled trials involving adults with a primary diagnosis of CFS who were able to participate in exercise therapy. Studies had to compare exercise therapy with passive control, psychological therapies, adaptive pacing therapy or pharmacological therapy. Two review authors independently performed study selection, risk of bias assessments and data extraction.
We combined continuous measures of outcomes using mean differences (MDs) and standardised mean differences (SMDs). We combined serious adverse reactions and drop-outs using risk ratios (RRs). We calculated an overall effect size with 95% confidence intervals (CIs) for each outcome. We included eight randomised controlled studies and report data from 1518 participants in this review. Three studies diagnosed individuals with CFS using the 1994 criteria of the Centers for Disease Control and Prevention (CDC); five used the Oxford criteria. Exercise therapy lasted from 12 to 26 weeks. Seven studies used variations of aerobic exercise therapy such as walking, swimming, cycling or dancing, provided at mixed levels of intensity from very low to quite rigorous, whilst one study used anaerobic exercise. Control groups consisted of passive control (eight studies; e.g. treatment as usual, relaxation, flexibility) or CBT (two studies), cognitive therapy (one study), supportive listening (one study), pacing (one study), pharmacological treatment (one study) and combination treatment (one study). Risk of bias varied across studies, but within each study, little variation was found in the risk of bias across our primary and secondary outcome measures. Investigators compared exercise therapy with 'passive' control in eight trials, which enrolled 971 participants.
Seven studies consistently showed a reduction in fatigue following exercise therapy at end of treatment, even though the fatigue scales used different scoring systems: an 11-item scale with a scoring system of 0 to 11 points (MD -6.06, 95% CI -6.95 to -5.17; one study, 148 participants; low-quality evidence); the same 11-item scale with a scoring system of 0 to 33 points (MD -2.82, 95% CI -4.07 to -1.57; three studies, 540 participants; moderate-quality evidence); and a 14-item scale with a scoring system of 0 to 42 points (MD -6.80, 95% CI -10.31 to -3.28; three studies, 152 participants; moderate-quality evidence). Serious adverse reactions were rare in both groups (RR 0.99, 95% CI 0.14 to 6.97; one study, 319 participants; moderate-quality evidence), but sparse data made it impossible for review authors to draw conclusions. Study authors reported a positive effect of exercise therapy at end of treatment with respect to sleep (MD -1.49, 95% CI -2.95 to -0.02; two studies, 323 participants), physical functioning (MD 13.10, 95% CI 1.98 to 24.22; five studies, 725 participants) and self-perceived changes in overall health (RR 1.83, 95% CI 1.39 to 2.40; four studies, 489 participants). It was not possible for review authors to draw conclusions regarding the remaining outcomes. Investigators compared exercise therapy with CBT in two trials (351 participants). One trial (298 participants) reported little or no difference in fatigue at end of treatment between the two groups using an 11-item scale with a scoring system of 0 to 33 points (MD 0.20, 95% CI -1.49 to 1.89). Both studies measured differences in fatigue at follow-up, but neither found differences between the two groups using an 11-item fatigue scale with a scoring system of 0 to 33 points (MD 0.30, 95% CI -1.45 to 2.05) and a nine-item Fatigue Severity Scale with a scoring system of 1 to 7 points (MD 0.40, 95% CI -0.34 to 1.14). Serious adverse reactions were rare in both groups (RR 0.67, 95% CI 0.11 to 3.96).
We observed little or no difference in physical functioning, depression, anxiety and sleep, and we were not able to draw any conclusions with regard to pain, self-perceived changes in overall health, use of health service resources and drop-out rate. With regard to other comparisons, one study (320 participants) suggested a general benefit of exercise over adaptive pacing, and another study (183 participants) suggested a benefit of exercise over supportive listening. The available evidence was too sparse to draw conclusions about the effect of pharmaceutical interventions. Patients with CFS may generally benefit and feel less fatigued following exercise therapy, and no evidence suggests that exercise therapy may worsen outcomes. A positive effect with respect to sleep, physical function and self-perceived general health has been observed, but no conclusions for the outcomes of pain, quality of life, anxiety, depression, drop-out rate and health service resources were possible. The effectiveness of exercise therapy seems greater than that of pacing but similar to that of CBT. Randomised trials with low risk of bias are needed to investigate the type, duration and intensity of the most beneficial exercise intervention.
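The review pools continuous outcomes as (standardised) mean differences with 95% CIs. A generic inverse-variance fixed-effect pooling step can be sketched as follows; the study estimates and standard errors below are hypothetical, not data from the review:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study effect sizes
    (e.g. mean differences). Returns the pooled estimate and a 95% CI."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Hypothetical mean differences on the same fatigue scale
mds = [-2.5, -3.0, -2.0]
ses = [0.8, 0.6, 1.0]
pooled, lo, hi = pool_fixed_effect(mds, ses)
```

Random-effects pooling (as often used in Cochrane reviews when heterogeneity is present) would additionally widen the weights by a between-study variance term.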
[Correction of thoracolumbar kyphoscoliosis by modified "eggshell" osteotomy].
To evaluate the effectiveness of modified "eggshell" osteotomy for the treatment of thoracolumbar kyphoscoliosis. Between April 2009 and June 2014, 19 patients with spinal deformity underwent modified "eggshell" osteotomy, consisting of initially preserving the posterior bony structures and enlarging the surgical field for cancellous bone removal. There were 14 males and 5 females with an average age of 37.8 years (range, 18-76 years) and a median disease duration of 7 years (range, 1-40 years). The causes included ankylosing spondylitis in 13 cases, spinal tuberculosis in 3 cases, and chronic vertebral compression fracture in 3 cases. Eleven patients showed single kyphosis and 8 patients had kyphoscoliosis. The preoperative Cobb angle of kyphosis was (64.2 ± 30.1) degrees, and the Cobb angle of scoliosis was (19.9 ± 12.8) degrees. The apical vertebrae were T10 in 1 case, L1 in 3 cases, L2 in 7 cases, T10-11 in 2 cases, T12-L1 in 4 cases, T12-L2 in 1 case, and T10-L1 in 1 case. Preoperative visual analogue scale (VAS) and Japanese Orthopaedic Association (JOA) scores were 6.1 ± 1.9 and 15.2 ± 5.6, respectively. According to the Frankel criteria for spinal cord function, 16 cases were rated as grade E and 3 cases as grade D before operation. Cobb angle, VAS, and JOA scores were used to assess symptom relief. The operation time was 215-610 minutes (mean, 343 minutes); intraoperative blood loss ranged from 900 to 3 000 mL (mean, 1 573 mL). All incisions healed primarily. Delayed-onset ischemia-reperfusion injury of the spinal cord occurred in 1 case at 6 days after operation, and the symptoms alleviated after conservative treatment. All 19 cases were followed up for 14-76 months (mean, 46 months). No loosening or breakage of internal fixation was observed during follow-up. The Cobb angles of kyphosis and scoliosis, and the VAS and JOA scores at 1 week after operation and at last follow-up were significantly improved compared with preoperative values (P < 0.05).
VAS and JOA scores at last follow-up were significantly improved compared with scores at 1 week after operation (P < 0.05), but no significant difference was found in the Cobb angle of either kyphosis or scoliosis between 1 week after operation and last follow-up (P > 0.05). At 1 week after operation, the correction rate for kyphosis was 34.1%-93.4% (mean, 62.2%), and the correction rate for scoliosis was 42.4%-100% (mean, 68.9%). At 48 months after operation, 3 patients with preoperative impaired spinal cord function achieved full recovery. The modified "eggshell" osteotomy offers the advantages of shorter operation time and less intraoperative blood loss, and can correct thoracolumbar kyphoscoliosis safely and effectively.
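The correction rates quoted above follow the usual definition: (preoperative angle - postoperative angle) / preoperative angle * 100. For example:

```python
def correction_rate(preop_cobb, postop_cobb):
    """Percentage correction of a Cobb angle after osteotomy:
    (preoperative - postoperative) / preoperative * 100."""
    return (preop_cobb - postop_cobb) / preop_cobb * 100

# A kyphosis reduced from 64 degrees to 24 degrees (illustrative values,
# close to the reported mean preoperative angle and mean correction rate)
rate = correction_rate(64.0, 24.0)  # 62.5%
```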
In vitro, Candida albicans releases the immune modulator adenosine and a second, high-molecular weight agent that blocks neutrophil killing.
We previously described a lyophilized supernatant from germinated Candida albicans that blocks human neutrophil (PMN) O2- production and degranulation stimulated by several PMN agonists but does not block stimulation by PMA. In studies to further characterize this Candida hyphal inhibitory product (CHIP), we noted several physicochemical parallels with the purine nucleoside adenosine (Ado). A Sephadex G-10 semipurified fraction of CHIP had an absorption peak near 260 nm, an apparent m.w. of less than 400, and was resistant to boiling and proteases. Maximally effective doses of CHIP (100 micrograms/ml) and Ado (100 microM) blocked 0.1 microM FMLP-stimulated O2- production by 76.8 +/- 4.1 and 81.7 +/- 4.8%, respectively. Ado deaminase, known to inactivate Ado, reversed inhibition by both Ado and CHIP. Results were comparable for the effect of CHIP and Ado on FMLP-stimulated beta-glucuronidase and lactoferrin release. Activation of the respiratory burst by opsonized C. albicans yeast was also inhibited by CHIP and Ado, but the extent of inhibition was less than for FMLP. At yeast:PMN ratios of 4:1, 10:1, and 40:1, CHIP inhibited O2- by -3.8%, 14.3%, and 12.8%, respectively; Ado blocked production by 32.9%, 24.2%, and 11.5%, respectively. The effect of CHIP and Ado on Candida killing by PMN was compared using two viability assays in each of four experiments. Ado (100 microM) had no effect on killing, although CHIP (100 micrograms/ml) inhibited killing in the MTT assay at 15 and 45 min by 81.6 +/- 6.3 and 24.7 +/- 6.2%, respectively; as assayed by CFU, CHIP inhibited killing by 34.1 +/- 6.2 and 10.3 +/- 2.5%, respectively. The ability of CHIP to inhibit killing was not affected by adding Ado deaminase, providing additional evidence that an Ado-like effect by CHIP is not essential for killing inhibition. Killing of opsonized Streptococcus pneumoniae was also inhibited in a concentration-dependent manner. 
Reverse-phase HPLC of the semipurified fraction revealed a peak, eluting identically to authentic Ado, which was eliminated by adding Ado deaminase. The Ado content of the G-10 fraction was sufficient to account fully for the FMLP-inhibitory activity. The antikilling activity was resistant to boiling and proteases but was eliminated by mild periodate oxidation. Fractions eluting from a Sephadex CL6B column between 0.8 and 2.0 x 10(6) m.w. had increased specific activity for killing inhibition. Specific activity increased as carbohydrate content increased, but killing inhibition by various Candida cell wall constituents was absent to modest compared with inhibition induced by CHIP.
[Compliance of the brain, Part 2: an approach from the local elastic and viscous moduli (author's transl)].
Information about the physical properties of the brain is important for elucidating both the physical changes of the morbid brain and the physical mechanism of traumatic brain injury. Under the hypothesis that the reaction of the living brain to a dynamic load can be modelled by a three-dimensional Maxwell-Voigt model, the elastic property of the brain was obtained as the Young's modulus (E: 10(-2) kgf/cm2), with an error of less than 10%, and the viscous property of the brain as the viscous modulus (eta: 10(-2) kgf.sec/cm2). It was confirmed that the reactive pressure of the brain to a dynamic load arose from the surface down to a depth of about 15 mm. In this report, experiments were performed on living normal brains and on edematous and necrotic ones produced by cold injury (dry ice-acetone) in dogs (9.0-16.0 kg). In the normal brain, E = 3.24 +/- 0.25 and eta = 1.10 +/- 0.37, and these moduli were stable as long as the physical condition of the brain was stable. Under dehydration with 20% mannitol, E increased (p < 0.01), but under hydration with 5% glucose, E did not change at all. In the edematous brain, E = 3.28 +/- 0.44 and eta = 1.74 +/- 0.06; E of the edematous brain was almost the same as that of normal brains, but under dehydration E of the edematous brain decreased (p < 0.10), whereas it increased under hydration (p < 0.05). In the necrotic brain, E = 1.60 +/- 0.14 and eta = 0.82 +/- 0.28; both moduli were lower, and they did not change at all under dehydration or hydration. As Young's modulus is the elastic index of the brain, its inverse (1/E) should be the compliance of the brain, that is to say, its buffer effect. In terms of compliance, the necrotic brain has the maximum buffer effect, and the over-hydrated edematous brain and the dehydrated normal brain have the minimum buffer effect.
From the analysis of the changes in the viscous moduli, it became clear that the viscous modulus behaved quite differently in living brains and in dead ones, and it was suspected that the viscoelastic properties of the living brain might not be so simple.
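The compliance discussed above is the inverse of Young's modulus, and the reactive pressure combines an elastic and a viscous term. A single Kelvin-Voigt element, a deliberate simplification of the paper's three-dimensional Maxwell-Voigt model, can be sketched as:

```python
def voigt_stress(E, eta, strain, strain_rate):
    """Reactive stress of a single Kelvin-Voigt element:
    sigma = E * eps + eta * d(eps)/dt.
    Units follow the paper: E in 1e-2 kgf/cm2, eta in 1e-2 kgf.sec/cm2.
    (Illustrative only; the paper's full model has more elements.)"""
    return E * strain + eta * strain_rate

def compliance(E):
    """Buffer effect of the brain, taken as the inverse of Young's modulus."""
    return 1.0 / E

# Reported mean moduli: normal E = 3.24, necrotic E = 1.60, so the
# necrotic brain has the larger compliance (greater buffer effect).
normal_buffer = compliance(3.24)
necrotic_buffer = compliance(1.60)
```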
Effect of orlistat, micronised fenofibrate and their combination on metabolic parameters in overweight and obese patients with the metabolic syndrome: the FenOrli study.
Obesity is becoming increasingly common worldwide and is strongly associated with the metabolic syndrome (MetS). MetS is considered to be a cluster of risk factors that increase the risk of vascular events. In an open-label randomised study (the FenOrli study) we assessed the effect of orlistat and fenofibrate treatment, alone or in combination, on reversing the diagnosis of MetS (primary end-point) as well as on anthropometric and metabolic parameters (secondary end-points) in overweight and obese patients with MetS but no diabetes. Overweight and obese patients (N = 89, body mass index (BMI) > 28 kg/m2) with MetS [as defined by the National Cholesterol Education Program (NCEP) Adult Treatment Panel (ATP) III criteria] participated in the study. Patients were prescribed a low-calorie low-fat diet and were randomly allocated to receive orlistat 120 mg three times a day (tid) (O group), micronised fenofibrate 200 mg/day (F group), or orlistat 120 mg tid plus micronised fenofibrate 200 mg/day (OF group). Body weight, BMI, waist circumference, blood pressure, serum total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), non-HDL-C, triglyceride, creatinine (SCr) and uric acid (SUA) levels, as well as homeostasis model assessment (HOMA) index and liver enzyme activities were measured at baseline and after 3 months of treatment. Of the 89 patients enrolled, three (one in each group) dropped out during the study due to side effects. After the 3-month treatment period, 43.5% of patients in the O group, 47.6% in the F group and 50% in the OF group no longer met the MetS diagnostic criteria (primary end-point, p < 0.0001 vs. baseline in all treatment groups). No significant difference in the primary end-point was observed between the three treatment groups.
Significant reductions in body weight, BMI, waist circumference, blood pressure, TC, LDL-C, non-HDL-C, triglyceride and SUA levels, as well as gamma-glutamyl transpeptidase activity and HOMA index were observed in all treatment groups. In the OF group a greater decrease in TC (-26%) and LDL-C (-30%) was observed compared with that in the O and F groups (p < 0.01) and a more pronounced reduction of triglycerides (-37%) compared with that in the O group (p < 0.05). SUA levels and alkaline phosphatase activity decreased more in the F and OF groups compared with the O group (p < 0.05). Moreover, SCr significantly increased and estimated creatinine clearance decreased in the F and OF groups but they were not significantly altered in the O group (p < 0.01 for the comparison between O and either F or OF groups). Glucose (in groups O and OF), as well as insulin levels and HOMA index (in all groups), were significantly reduced after treatment (p < 0.05 vs. baseline). The combination of orlistat and micronised fenofibrate appears to be safe and may further improve metabolic parameters in overweight and obese patients with MetS compared with each monotherapy.
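The HOMA index mentioned above is conventionally computed from fasting glucose and insulin. The abstract does not state which HOMA variant was used, so the standard HOMA1-IR formula is shown here as an assumption:

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uu_ml):
    """Standard HOMA1-IR insulin resistance index:
    glucose (mmol/L) * insulin (microU/mL) / 22.5.
    Assumed variant; the FenOrli abstract only says "HOMA index"."""
    return fasting_glucose_mmol_l * fasting_insulin_uu_ml / 22.5

# Hypothetical fasting values: 5.5 mmol/L glucose, 12 microU/mL insulin
value = homa_ir(5.5, 12.0)
```

Higher values indicate greater insulin resistance, so the reported post-treatment reduction in HOMA index corresponds to improved insulin sensitivity.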
The role of pharmacogenetics in the disposition of and response to tacrolimus in solid organ transplantation.
The calcineurin inhibitor tacrolimus is the backbone of immunosuppressive drug therapy after solid organ transplantation. Tacrolimus is effective in preventing acute rejection but has considerable toxicity and displays marked inter-individual variability in its pharmacokinetics and pharmacodynamics. The genetic basis of these phenomena is reviewed here. With regard to its pharmacokinetic variability, a single nucleotide polymorphism (SNP) in cytochrome P450 (CYP) 3A5 (6986A>G) has been consistently associated with tacrolimus dose requirement. Patients expressing CYP3A5 (those carrying the A nucleotide, defined as the *1 allele) have a dose requirement that is around 50 % higher than non-expressers (those homozygous for the G nucleotide, defined as the *3 allele). A randomised controlled study in kidney transplant recipients has demonstrated that a CYP3A5 genotype-based approach to tacrolimus dosing leads to more patients reaching the target concentration early after transplantation. However, no improvement of clinical outcomes (rejection incidence, toxicity) was observed, which may have been the result of the design of this particular study. In addition to CYP3A5 genotype, other genetic variants may also contribute to the variability in tacrolimus pharmacokinetics. Among these, the CYP3A4*22 and POR*28 SNPs are the most promising. Individuals carrying the CYP3A4*22 T-variant allele have a lower tacrolimus dose requirement than individuals with the CYP3A4*22 CC genotype and this effect appears to be independent of CYP3A5 genotype status. Individuals carrying the POR*28 T-variant allele have a higher tacrolimus dose requirement than POR*28 CC homozygotes but this association was only found in CYP3A5-expressing individuals. Other, less well-defined SNPs have been inconsistently associated with tacrolimus dose requirement. 
It is envisaged that in the future, algorithms incorporating clinical, demographic and genetic variables will be developed that will aid clinicians with the determination of the tacrolimus starting dose for an individual transplant recipient. Such an approach may limit early tacrolimus under-exposure and toxicity. With regard to tacrolimus pharmacodynamics, no strong genotype-phenotype relationships have been identified. Certain SNPs associate with rejection risk but these observations await replication. Likewise, the genetic basis of tacrolimus-induced toxicity remains unclarified. SNPs in the genes encoding for the drug transporter ABCB1 and the CYP3A enzymes may relate to chronic nephrotoxicity but findings have been inconsistent. No genetic markers reliably predict new-onset diabetes mellitus after transplantation, hypertension or neurotoxicity. The CYP3A5*1 SNP is currently the most promising biomarker for tailoring tacrolimus treatment. However, before CYP3A5 genotyping is incorporated into the routine clinical care of transplant recipients, prospective clinical trials are needed to determine whether such a strategy improves patient outcomes. The role of pharmacogenetics in tacrolimus pharmacodynamics should be explored further by the study of intra-lymphocyte and tissue tacrolimus concentrations.
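The roughly 50% higher dose requirement of CYP3A5 expressers hints at the kind of genotype-based starting-dose rule such an algorithm might contain. The sketch below is purely illustrative: the 1.5 multiplier and the function itself are assumptions drawn only from the dose-requirement figure above, not a validated or clinical dosing algorithm.

```python
def tacrolimus_start_dose(standard_dose_mg, cyp3a5_genotype):
    """Illustrative CYP3A5 genotype-guided starting dose.

    Assumption: CYP3A5 expressers (*1 allele carriers) require a dose
    around 50% higher than *3/*3 non-expressers. Illustration only,
    not clinical guidance.
    """
    expresser_genotypes = {"*1/*1", "*1/*3"}
    if cyp3a5_genotype in expresser_genotypes:
        return standard_dose_mg * 1.5
    if cyp3a5_genotype == "*3/*3":
        return standard_dose_mg
    raise ValueError(f"unknown CYP3A5 genotype: {cyp3a5_genotype}")

dose = tacrolimus_start_dose(4.0, "*1/*3")  # 6.0 mg
```

A real algorithm would, as the review notes, also fold in clinical and demographic covariates and possibly CYP3A4*22 and POR*28 status.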
Integrated approaches to improve birth outcomes: perinatal periods of risk, infant mortality review, and the Los Angeles Mommy and Baby Project.
This article provides an example of how Perinatal Periods of Risk (PPOR) can provide a framework and offer analytic methods that move communities to productive action to address infant mortality. Between 1999 and 2002, the infant mortality rate in the Antelope Valley region of Los Angeles County increased from 5.0 to 10.6 per 1,000 live births. Of particular concern, infant mortality among African Americans in the Antelope Valley rose from 11.0 per 1,000 live births (7 cases) in 1999 to 32.7 per 1,000 live births (27 cases) in 2002. In response, the Los Angeles County Department of Public Health, Maternal, Child, and Adolescent Health Programs partnered with a community task force to develop an action plan to address the issue. Three stages of the PPOR approach were used: (1) Assuring Readiness; (2) Data and Assessment, which included (a) using 2002 vital records to identify areas with the highest excess rates of feto-infant mortality (Phase 1 PPOR) and (b) implementing Infant Mortality Review (IMR) and the Los Angeles Mommy and Baby (LAMB) Project, a population-based study to identify potential factors associated with adverse birth outcomes (Phase 2 PPOR); and (3) Strategy and Planning, to develop strategic actions for targeted prevention. A description of stakeholders' commitments to improve birth outcomes and monitor infant mortality is also given. The Antelope Valley community was engaged and ready to investigate the local rise in infant mortality. Phase 1 PPOR analysis identified Maternal Health/Prematurity and Infant Health as the most important periods of risk for further investigation and potential intervention. During the Phase 2 PPOR analyses, IMR found a significant proportion of mothers with previous fetal loss (45%) or low birth weight/preterm (LBW/PT) birth, late prenatal care (39%), maternal infections (47%), and infant safety issues (21%).
After adjusting for potential confounders (maternal age, race, education level, and marital status), the LAMB case-control study (279 controls, 87 cases) identified additional factors associated with LBW births: high blood pressure before and during pregnancy, pregnancy weight gain falling outside of the recommended range, smoking during pregnancy, and feeling unhappy during pregnancy. PT birth was significantly associated with having a previous LBW/PT birth, not taking multivitamins before pregnancy, and feeling unhappy during pregnancy. In response to these findings, community stakeholders gathered to develop strategic actions for targeted prevention to address infant mortality. Subsequently, key funders infused resources into the community, resulting in expanded case management of high-risk women, increased family planning services and local resources, better training for nurses, and public health initiatives to increase awareness of infant safety. Community readiness, mobilization, and alignment in addressing a public health concern in Los Angeles County enabled the integration of PPOR analytic methods into the established IMR structure and the design and implementation of a population-based study (LAMB) to monitor the factors associated with adverse birth outcomes. PPOR proved an effective approach for identifying the risk and social factors of greatest concern and the magnitude of the problem, and for mobilizing community action to reduce infant mortality in the Antelope Valley.
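The LAMB analysis reported adjusted associations from a case-control comparison; the underlying effect measure in such a design is the odds ratio. A crude (unadjusted) odds ratio with a Woolf 95% confidence interval can be sketched as follows, using hypothetical counts (the published analysis used adjusted estimates, which require regression modelling):

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls):
    """Crude odds ratio for a 2x2 case-control table with a 95% Woolf
    (log-normal) confidence interval."""
    a, b = exposed_cases, exposed_controls
    c, d = unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical exposure counts among the 87 cases and 279 controls
or_, lo, hi = odds_ratio_ci(30, 50, 57, 229)
```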
Effects of collection time on flow of chromium and dry matter and on basal ileal endogenous losses of amino acids in growing pigs.
The objectives of this experiment were to examine the diurnal patterns of chromium and DM flow at the distal ileum of pigs and to determine the effect of collection time on basal ileal endogenous losses (BEL) of CP and AA. Eight barrows with an initial BW of 34.6 kg (SD = 2.1) were individually fitted with a T-cannula in the distal ileum and randomly allotted to a replicated 4 × 4 Latin square design with 4 diets and 4 periods in each square. Three diets contained either corn, soybean meal, or distillers dried grains with solubles as the sole source of CP and AA. An N-free diet was also prepared. All diets contained 0.5% chromic oxide as an indigestible marker. Equal meals were provided at 0800 and 2000 h. Ileal digesta samples were collected in 2-h intervals from 0800 to 2000 h during the last 3 d of each 7-d period. The concentration of Cr in ileal digesta samples collected in each of the six 2-h periods exhibited a quadratic effect (P < 0.01), increasing and then decreasing in pigs fed the CP-containing diets. However, the concentration of Cr in ileal digesta collected in each of the six 2-h periods from pigs fed the N-free diet increased (linear, P < 0.01). These differences were possibly related to differences in DM flow, because DM flow to the distal ileum had a pattern that was opposite of that observed for the concentration of Cr in the ileal digesta samples. The BEL of all indispensable AA and the sum of indispensable AA from pigs fed the N-free diet decreased (linear, P < 0.05) in each of the six 2-h periods, with the exception that the BEL of Arg increased and then decreased (quadratic, P < 0.05). The BEL of Asp, Cys, Glu, Ser, and Tyr also decreased (linear, P < 0.05) during each of the six 2-h periods, whereas the BEL of Pro and the sum of dispensable AA increased and then decreased (quadratic, P < 0.05) over the 12 h. Collection time did not affect the BEL of CP.
No differences were observed in the concentration of Cr, flow of DM, or basal endogenous loss of all AA if ileal digesta samples were collected over 6-, 8-, or 10-h periods, compared with collection over a 12-h period. In conclusion, diurnal variation of Cr concentration, DM flow, and the BEL of all AA were observed, but 4 to 6 h of ileal sample collection starting 4 or 6 h after feeding may provide representative samples that allow for calculation of accurate values for CP and AA digestibility.
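Basal endogenous losses with the N-free method are computed from the indigestible-marker ratio: nutrient flow = digesta concentration * (marker in diet / marker in digesta). A sketch with hypothetical concentrations (the function name and the example values are illustrative, not data from the study):

```python
def ileal_flow(nutrient_digesta_pct, cr_diet_pct, cr_digesta_pct):
    """Flow of a nutrient to the distal ileum, in g per kg of DM intake,
    from the standard index-marker equation:
    flow = nutrient_in_digesta * (Cr_diet / Cr_digesta).
    Concentrations are given as % of DM; the factor 10 converts % to g/kg."""
    return nutrient_digesta_pct * 10 * (cr_diet_pct / cr_digesta_pct)

# With an N-free diet, the ileal AA flow is itself the basal endogenous loss.
# Hypothetical values: 0.30% Lys in digesta DM, 0.35% Cr in the diet,
# 1.4% Cr in the digesta (Cr concentrates as DM is absorbed).
bel_lys = ileal_flow(0.30, 0.35, 1.4)  # g/kg DMI
```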
Effect of prehabilitation on the outcome of anterior cruciate ligament reconstruction.
Prehabilitation is defined as preparing an individual to withstand a stressful event through enhancement of functional capacity. We hypothesized that a preoperative exercise program would enhance postoperative outcomes after anterior cruciate ligament reconstruction (ACLR). Randomized controlled clinical trial; Level of evidence, 1. Twenty volunteers awaiting ACLR were randomly assigned to a control or exercise intervention group. The exercise group completed a 6-week gym- and home-based exercise program. Assessments included the single-legged hop test; quadriceps and hamstring peak torque and magnetic resonance imaging cross-sectional area (CSA); the Modified Cincinnati Knee Rating System score; and muscle biopsy of the vastus lateralis muscle, completed at baseline, preoperatively, and 12 weeks postoperatively. Myosin heavy chain (MHC) isoform protein and messenger RNA (mRNA) expression were determined with SDS-PAGE (sodium dodecyl sulfate polyacrylamide gel electrophoresis) and RT-PCR (real-time polymerase chain reaction), respectively; IGF-1 (insulin-like growth factor 1), MuRF-1 (muscle RING-finger protein-1), and MAFbx (muscle atrophy F-box) mRNA expression levels were determined with quantitative RT-PCR. Following 6 weeks of exercise intervention, the single-legged hop test results improved significantly in the exercise-injured limb compared with baseline (P = .001). Quadriceps peak torque in the injured limb improved, with similar gains in CSA, compared with baseline (P = .001); however, this increase was not significant compared with the control group. Quadriceps and vastus medialis CSA were also larger in the exercise group than in controls (P = .0024 and P = .015, respectively). The modified Cincinnati score was better in the exercise-injured limb compared with baseline. At 12 weeks postoperatively, the rate of decline in the single-legged hop test was reduced in the exercise group compared with controls (P = .001).
Similar trends were not seen for quadriceps peak torque and CSA. The vastus medialis CSA had regressed to levels similar to those of the control group (P = .008). The modified Cincinnati score continued to increase in the exercise group compared with controls (P = .004). The expression of the hypertrophic IGF-1 gene was significantly increased after the exercise intervention (P = .028), with a decrease back to baseline at 12 weeks postoperatively (P = .012). Atrophic MuRF-1 gene expression was decreased after the intervention compared with baseline (P = .05) but increased again at 12 weeks postoperatively (P = .03). The MAFbx levels did not change significantly in either group at any time point. At the mRNA level, there was a shift from the MHC-IIx isoform to MHC-IIa after exercise, with significant changes compared with controls preoperatively (P = .028). Protein testing was able to reproduce this increase for MHC-IIa isoform expression only. The 6-week progressive prehabilitation program for subjects undergoing ACLR led to improved knee function based on the single-legged hop test and self-reported assessment using the modified Cincinnati score. These effects were sustained at 12 weeks postoperatively. This study supports prehabilitation as a consideration for patients awaiting ACLR; however, further studies are warranted.
[Minimally invasive osteosynthesis of pilon fractures].
The primary aim of minimally invasive osteosynthesis (MIO) is anatomical reconstruction of the distal tibial articular surface with preservation of the soft tissue, allowing early functional postoperative management. This should lead to normal bone healing and recovery without arthrosis. Indications are fractures of type Rüedi I + II or AO 43-B1, -B2, -C1, and -C2; rare and relative indications are fractures of type AO 43-B3 and -C3 without IIb and III° soft tissue injuries. MIO may also be used as an additional technique for osteosynthesis with external fixators. Contraindications are severe comminuted fractures of the pilon with closed or open II and III° soft tissue damage, and severe soft tissue damage (III°). An intensive preoperative analysis of conventional X-rays and CT images is necessary to support the indication for MIO of pilon fractures. The first step is reduction of the fracture with axial traction, in some cases with a distractor or external fixator. The definitive reduction is performed with K-wire joysticks or reduction clamps. The key step is intraoperative X-ray control of the reduction in various planes, if possible with 3D reconstruction; an alternative is arthroscopic control of the articular reduction. All manipulations are performed via small incisions. After incision of the skin, all layers of soft tissue are gently divided with scissors, allowing the soft tissue, including vessels and nerves, to be moved out of the working channel. All instruments and implants (e.g., K-wires, drill sleeve, screws) are introduced between the opened scissor branches. After lag screw osteosynthesis with 3.5 or 4.5 mm conventional screws, the articular block is reduced to the diaphysis and fixed with a minimally invasively inserted plate. Under X-ray control in two planes, the plate is adjusted into position and preliminarily fixed with K-wires. The screws are inserted using the minimally invasive technique.
Immediate mobilization starting on day 1 with partial weight bearing (sole contact or 12-15 kg) for 4-6 weeks; postoperative protection with an orthosis or split cast for 2-5 days depending on the degree of swelling; early functional physiotherapy; thrombosis prophylaxis with heparin until complete mobilization. Full weight bearing, depending on fracture type, after 6-8 weeks. Advantages of minimally invasive osteosynthesis of pilon fractures compared with conventional open reduction and osteosynthesis include protection of the soft tissue and no further disturbance of circulation: ideal prerequisites for undisturbed bone healing. In 129 patients after osteosynthesis of pilon fractures, no reoperations were necessary when using MIO, whereas reoperation was necessary in 17.6% of all patients treated with other techniques. In addition, no infections were observed with MIO vs. 13.4% of patients with other techniques. The average Olerud/Molander score was 95 points for the MIO group vs. 58.91 points for all patients treated, while MIO plus an external fixator received a score of 50 points. The average Ankle Hindfoot score was 64.9 points overall, 87.5 points for MIO, and 58 points for operations consisting of MIO plus an external fixator.
In vitro fertilisation and stage 3 retinopathy of prematurity.
To re-examine the risk of children born by assisted conception developing stage 3 retinopathy of prematurity (ROP) and to define whether the risk of ROP varies with the method of assisted conception. This was a retrospective study carried out between December 1995 and December 1998 of infants in a single neonatal unit serving the Brent and Harrow area of North West Thames requiring screening and treatment of ROP. The infants screened were identified from the ROP screening database. Those conceived by in vitro fertilisation (IVF) and other forms of assisted conception were identified by reviewing the neonatal notes and the maternal obstetric records. Birth weight, gestational age and the type of assisted conception were recorded. The presence or absence of any stage of ROP, its location and severity and the cases requiring treatment were recorded. One hundred and seventy-nine infants fulfilled the screening criteria during this period. Acute ROP was detected in 32.4% (58 infants) and stage 3 ROP developed in 15.6% (28 infants). Twenty-one infants (11.7%) were born after assisted conception, with 12 (6.7%) being conceived by IVF. The others were conceived on clomiphene (8) or after intrauterine insemination (1). Assisted conception accounted for 21.4% of all those reaching stage 3 disease and 28.6% of those infants requiring treatment. Of the 12 infants conceived by IVF, 41.6% (5 infants) developed acute ROP which progressed to threshold ROP in all infants (100%). Of the assisted conception babies requiring treatment for ROP, 83.3% were conceived by IVF. The other child had been conceived on clomiphene. The gestational age and birth weight of the IVF infants reaching stage 3 ROP were 26.6 +/- 0.89 weeks and 937 +/- 170.2 g. 
The gestational age and birth weight of the remaining infants reaching stage 3 ROP (25.739 +/- 1.13 weeks and 735.29 +/- 117.70 g) were lower than in those conceived by assisted conception; however, the differences did not reach statistical significance (p = 0.35 and p = 0.13, respectively). In this study, 11.7% of the group requiring screening were conceived by assisted conception. Of all babies requiring treatment for ROP, 28.6% were born after assisted conception, and of the assisted conception group, 83.3% were conceived by IVF. Assisted conception using IVF rather than other techniques appears to be the major risk factor for the development of threshold ROP. We would advise increased vigilance when screening babies conceived by IVF.
Patient screening and medical evaluation for implant and preprosthetic surgery.
Implant and preprosthetic surgeries aim to restore normal anatomic contours, function, comfort, aesthetics, and oral health. As such, they are not life-saving procedures, and the prime concern must therefore be not to undermine the patient's overall health and safety. Only then should every step be taken to select the appropriate treatment plan and maximize the longevity of the implanted system, including the overlying prostheses. One important category into which a number of possible complications may fall is inadequate systemic screening of patients prior to implant and biomaterial insertion. Without attempting to cover the whole of human pathology, it is no longer appropriate to limit the general contraindications of implantology to the traditionally considered malfunctions of the pancreas, liver, or hematopoietic system, and to ignore the devastating long-term effects of smoking or inadequate dietary habits. There are, in fact, a number of systemic problems that may create major risk factors. On the other hand, modern standards of care should not systematically exclude implant surgery for patients with relative or marginal health conditions without exploring the possibilities of improving and stabilizing those conditions. As newer techniques of general anesthesia and intravenous sedation are more frequently used on an ambulatory basis, allowing implant surgeons to take their patients into various degrees of consciousness or deep sedation, patient screening should also take into consideration factors related to this form of management. An arbitrary guideline for patient selection may be based on the classification of the American Society of Anesthesiology (ASA). This guideline restricts (with very few exceptions) intraosseous implants and implant-related graft surgeries to patients who fall into the ASA1 or ASA2 categories of the classification.
In the domain of subperiosteal implants for the treatment of advanced atrophy of the mandible, the body's response seems to be much less dramatic than to endosseous devices or grafted sites: cortical histoarchitecture and metabolism are far less affected by organ disorders than are endosseous structures. This article presents a number of absolute contraindications and analyzes a series of relative contraindications for which the doctor's judgment remains the decisive factor. For the latter, it proposes treatment patterns that could optimize certain marginal health conditions or stabilize unbalanced biological functions prior to, or at the time of, surgery. As life expectancy in industrialized countries continues to increase, a greater number of elderly patients are equipped with implant-supported prostheses. The effort must therefore be focused on keeping a regular and watchful eye on their general health and on screening for possible geriatric conditions responsible for long-term implant failure. Will a minimum knowledge of internal medicine be a prerequisite for future academic implant education?
Adalimumab: a review of its use in rheumatoid arthritis.
Adalimumab (Humira) is a recombinant, fully human IgG1 monoclonal antibody that binds specifically to tumor necrosis factor (TNF)-alpha, thereby neutralizing the activity of the cytokine. Subcutaneous adalimumab has been investigated in well-designed trials in patients with active rheumatoid arthritis despite treatment with disease-modifying antirheumatic drugs (DMARDs). Patients receiving adalimumab 40 mg every other week in combination with methotrexate (Anti-TNF Research Study Program of the Monoclonal Antibody Adalimumab [ARMADA] and DE019 trials) or standard antirheumatic therapy (Safety Trial of Adalimumab in Rheumatoid Arthritis [STAR] trial) for 24-52 weeks had significantly higher American College of Rheumatology (ACR) 20, ACR50, and ACR70 response rates than patients receiving placebo plus methotrexate or standard antirheumatic therapy. In ARMADA, an ACR20 response was achieved in 25%, 52%, and 67% of adalimumab plus methotrexate recipients at weeks 1, 4, and 24, respectively. In ARMADA and DE019, improvements in the individual components of the ACR response were significantly greater with adalimumab 40 mg every other week plus methotrexate than with placebo plus methotrexate. Monotherapy with adalimumab 40 mg every other week was associated with significantly higher ACR20, ACR50, and ACR70 response rates than placebo, as well as significantly greater improvements in the individual components of the ACR response. ACR responses were sustained with adalimumab according to the results of extension studies in which patients received adalimumab in combination with methotrexate (up to 30 months) or as monotherapy (up to 5 years). In both concomitant therapy and monotherapy trials, adalimumab was associated with significantly greater improvements from baseline in health-related quality of life (HR-QOL) measures than placebo; adalimumab also retarded the radiographic progression of structural joint damage to a significant extent compared with placebo.
Adalimumab was generally well tolerated as both concomitant therapy and monotherapy. In ARMADA, there were no significant differences between adalimumab and placebo (in combination with methotrexate) in the incidence of adverse events; however, in STAR, the incidence of injection site reactions, rash, and back pain was significantly higher with adalimumab than with placebo (in combination with standard antirheumatic therapy). No cases of tuberculosis were reported in either trial. In conclusion, subcutaneous adalimumab in combination with methotrexate or standard antirheumatic therapy, or as monotherapy, is effective in the treatment of adults with active rheumatoid arthritis who have had an inadequate response to DMARDs. Adalimumab has a rapid onset of action and sustained efficacy. The drug also retards the progression of structural joint damage, improves HR-QOL, and is generally well tolerated. Thus, adalimumab is a valuable new option for the treatment of DMARD-refractory adult rheumatoid arthritis.
Laparoscopic approach to L4-L5 for interbody fusion using BAK cages: experience in the first 58 cases.
Operative reports were reviewed for patients who underwent laparoscopic fusion at the L4-L5 level, and information regarding the mobilization of the vessels was recorded. The purpose of this study was to describe variations in the approach used to address anatomical variations in the location of the great vessel bifurcation in the region of the L4-L5 intervertebral disc space when performing laparoscopic interbody fusion procedures. Recent interest in laparoscopic spine surgery using threaded cages has raised questions regarding the ability to safely access the L4-L5 disc using this approach. The laparoscopic transperitoneal approach to L5-S1 is below the bifurcation of the great vessels, thus requiring minimal mobilization of the iliac vessels. However, the transperitoneal approach to L4-L5 may be complicated by the bifurcation of the great vessels anterior to this disc space. Difficulty in placing two cages may occur if the vessels cannot be adequately mobilized. Data were collected for the consecutive series of the first 58 patients (40 males, 18 females; mean age 42.5 years) undergoing laparoscopic anterior lumbar interbody fusion (ALIF) at the L4-L5 level using BAK cages. Operative notes were reviewed to determine variations in the operative approach. In particular, it was recorded whether the L4-L5 disc was accessed above or below the bifurcation of the aorta and the vena cava, or by passing between these structures. Blood loss, operative time, and length of hospitalization were compared with respect to approach variation. In 30 patients, the L4-L5 disc was accessed above the great vessel bifurcation; in 18 patients, below the bifurcation; and in the remaining 10 patients, by passing between the vessels. There were no statistically significant differences in operative time, blood loss, or length of hospitalization with respect to the approach used. Three patients were converted to open procedures as a result of bleeding from segmental veins.
None required transfusions, and there were no postoperative sequelae. In two patients, successful endoscopic repair of segmental vein avulsion from the vena cava was performed using endoscopic loop ligatures. One patient had a secondary procedure to remove a cage that was causing nerve irritation, and one patient reported retrograde ejaculation after a two-level fusion. Another patient, in whom a posterior herniation was removed, later presented with a cerebrospinal fluid leak. Most of the operative complications occurred early in the series. The laparoscopic transperitoneal approach to L4-L5 for insertion of threaded fusion cages is feasible. The laparoscopic L4-L5 procedure can be accomplished with few complications, provided a dedicated team of collaborative surgeons with experience in laparoscopic spine techniques is employed. Variations in vascular anatomy did not prevent successful insertion of two threaded fusion cages.
Outcomes of carotid artery stenting in high-risk patients with carotid artery stenosis: a single neurovascular center retrospective review of 101 consecutive patients.
Carotid artery angioplasty and carotid artery stenting (CAS) offer a viable alternative to carotid endarterectomy for symptomatic and asymptomatic patients; however, the complication rates associated with CAS may be higher than previously documented. We evaluated the safety and efficacy of CAS in high surgical risk patients in a single neurovascular center retrospective review. In an institutional review board-approved retrospective review, the clinical variables and treatment outcomes of 101 consecutive patients (109 stents) with carotid stenosis treated from July 2001 to March 2007 were analyzed. Both symptomatic and asymptomatic stenoses were studied in high surgical risk patients as defined by the SAPPHIRE (Stenting and Angioplasty with Protection in Patients at High-Risk for Endarterectomy) trial: specifically, patients with clinically significant cardiac disease (congestive heart failure, abnormal stress test, or need for open-heart surgery), severe pulmonary disease, contralateral carotid occlusion, contralateral laryngeal nerve palsy, recurrent stenosis after carotid endarterectomy, previous radical neck surgery or radiation therapy to the neck, or an age older than 80 years. Seventy-four percent of the patients were symptomatic (n = 81), and the mean stenosis in symptomatic patients was 83%. Reasons for stenting included cardiac/pulmonary/medical risk (60%), contralateral internal carotid artery occlusion (8%), recurrent stenosis after carotid endarterectomy (11%), carotid dissection (6%), age older than 80 years (7%), previous radical neck surgery (7%), and previous neck radiation (1%). Stent deployment was achieved in 108 of 109 vessels (99%). Distal embolic protection devices were used in 72% of cases treated. The overall rate of in-hospital adverse events (transient ischemic attack, intracranial hemorrhage, minor stroke, major stroke, myocardial infarction, and death) was 8.3% (9 of 109).
Of these events, 2 patients (1.8%) experienced a hemispheric transient ischemic attack (neurological symptoms that resolved within 24 hours), and 2 others (1.8%) had transiently symptomatic acute reperfusion syndrome. The 30-day stroke/death/myocardial infarction risk was 4.6% (n = 5). Of these patients, 3 had minor strokes (2.7%), defined as a modified Rankin Scale score of less than 3 at 1-year follow-up; 1 had a major stroke (0.9%), defined as a modified Rankin Scale score of 3 or more at 1-year follow-up; and 1 patient died after a periprocedural myocardial infarction (0.9%). CAS can be performed with a low 30-day complication rate, even with a high percentage of symptomatic lesions. These results support the use of CAS in high surgical risk patients with both significant symptomatic and asymptomatic carotid artery disease.
[Current situation of food consumption and its correlation with climate among rural residents in China, 2000-2012].
To study the differences in food consumption among rural residents in various regions of China, and to analyze the climatic factors that affect the food consumption of rural residents. Based on consumption data for 13 kinds of food (wheat, rice, other grain, fresh vegetables, pork, beef and mutton, poultry, eggs and related products, milk and related products, aquatic products, edible oil, sugar, and liquor) collected from the China Statistical Yearbook and China's Economic and Social Data Research Platform during 2000 to 2012, cluster analysis was conducted to partition the dietary structure and compare the differences in food consumption in each geographical area. Eight climatic factors (average temperature, annual temperature difference, daily temperature difference, average air pressure, average daily precipitation, average wind speed, average relative humidity, and average sunshine duration) were selected as independent variables from the "Dataset of daily surface observation values in individual years (1981-2010) in China" and the "Dataset of annual values of climate data from Chinese surface stations for global exchange" released by the China Meteorological Data Service Center to establish a multivariate linear regression model of the correlation between food consumption and climate. The geographical partition of the dietary structure of rural residents in China was as follows: Beijing-Tianjin region, northeast region, upstream and downstream parts of the Yellow River region, southeast coastal area, middle and lower reaches of the Yangtze River region, Lingnan area, southwest region, Inner Mongolia, Tibet, and Qing-Xin (Qinghai and Xinjiang) region. In the comparison of annual per capita food consumption across regions, the consumption of eggs and related products (12.96 kg) and edible oil (10.18 kg) in the Beijing-Tianjin region, vegetables (128.20 kg) in the northeast region, aquatic products (15.81 kg) and liquor (19.04 kg) in the southeast coastal areas, rice (189.36 kg) and poultry (10.17 kg) in the Lingnan area, pork (26.46 kg) in southwest China, other food (126.31 kg), milk and related products (32.38 kg), beef and mutton (12.87 kg), and sugar (2.65 kg) in Tibet, and wheat (184.63 kg) in the Qing-Xin region was the highest in China. The consumption of sugar (0.79 kg) in the northeast region, other food (10.64 kg) in the southeast coastal areas, wheat (0.60 kg) and milk and related products (0.33 kg) in the Lingnan area, beef and mutton (0.43 kg) in southwest China, edible oil (4.21 kg) in Inner Mongolia, vegetables (19.21 kg), eggs and related products (0.60 kg), aquatic products (0.01 kg), pork (2.23 kg), and poultry (0.03 kg) in Tibet, and rice (13.00 kg) and liquor (2.25 kg) in the Qing-Xin region was the lowest in China. Multiple linear regression analysis of climate and food consumption showed that consumption of wheat among the staple foods was negatively correlated with average daily precipitation (P < 0.01, Adj. R² = 0.632), and rice consumption was positively correlated with average daily precipitation and negatively correlated with average temperature and daily temperature difference (P < 0.01, Adj. R² = 0.839). Vegetable consumption was positively correlated with annual temperature difference and negatively correlated with average sunshine duration (P < 0.01, Adj. R² = 0.450). Pork consumption was negatively correlated with average sunshine duration (P < 0.01, Adj. R² = 0.386). The dietary structure of rural residents in China can be divided into 10 geographical partitions. Average daily precipitation is negatively correlated with wheat consumption and positively correlated with rice consumption. Average sunshine duration has a negative impact on vegetable and pork consumption. Average temperature and daily temperature difference are negatively correlated with rice consumption, and annual temperature difference has a positive impact on vegetable consumption.
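The regression findings above are ordinary least-squares fits of per capita consumption on climate variables, reported with adjusted R². As a minimal illustration only (the data below are synthetic, not the paper's), the model and the adjusted-R² statistic can be sketched as:

```python
import numpy as np

def fit_ols_adj_r2(X, y):
    """Fit ordinary least squares y ~ X (with intercept) and return the
    coefficients plus adjusted R^2, the fit statistic reported in the study."""
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares solution
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return beta, adj_r2

# Synthetic regional data: rice consumption rising with precipitation and
# falling with mean temperature, the direction of effect the paper reports.
rng = np.random.default_rng(0)
precip = rng.uniform(0, 10, 40)     # average daily precipitation (mm)
temp = rng.uniform(5, 25, 40)       # average temperature (deg C)
rice = 100 + 8 * precip - 3 * temp + rng.normal(0, 5, 40)

X = np.column_stack([precip, temp])
beta, adj_r2 = fit_ols_adj_r2(X, rice)
print(f"intercept={beta[0]:.1f}, b_precip={beta[1]:.2f}, "
      f"b_temp={beta[2]:.2f}, adj_R2={adj_r2:.3f}")
```

On this synthetic sample the fitted signs match the reported correlations (positive for precipitation, negative for temperature); the actual study additionally screened eight climate factors and selected significant predictors per food category.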
Mycophenolic acid versus azathioprine as primary immunosuppression for kidney transplant recipients.
Modern immunosuppressive regimens after kidney transplantation usually use a combination of two or three agents of different classes to prevent rejection and maintain graft function. Most frequently, calcineurin-inhibitors (CNI) are combined with corticosteroids and a proliferation-inhibitor, either azathioprine (AZA) or mycophenolic acid (MPA). MPA has largely replaced AZA as a first-line agent in primary immunosuppression, as MPA is believed to be of stronger immunosuppressive potency than AZA. However, treatment with MPA is more costly, which calls for a comprehensive assessment of the comparative effects of the two drugs. This review of randomised controlled trials (RCTs) aimed to assess the benefits and harms of MPA versus AZA in primary immunosuppressive regimens after kidney transplantation. The two agents were compared regarding their efficacy in maintaining graft and patient survival, preventing acute rejection, and maintaining graft function, and regarding their safety, including infections, malignancies and other adverse events. Furthermore, we investigated potential effect modifiers, such as transplantation era and the concomitant immunosuppressive regimen, in detail. We searched Cochrane Kidney and Transplant's Specialised Register (to 21 September 2015) through contact with the Trials' Search Co-ordinator using search terms relevant to this review. All RCTs of MPA versus AZA in primary immunosuppression after kidney transplantation were included, without restriction on language or publication type. Two authors independently determined study eligibility, assessed risk of bias and extracted data from each study. Statistical analyses were performed using the random-effects model, and the results were expressed as risk ratio (RR) for dichotomous outcomes and mean difference (MD) for continuous outcomes, with 95% confidence intervals (CI). We included 23 studies (94 reports) that involved 3301 participants.
All studies tested mycophenolate mofetil (MMF), an MPA, and 22 studies reported at least one outcome relevant to this review. Assessment of methodological quality indicated that important information on factors used to judge susceptibility to bias was infrequently and inconsistently reported. MMF treatment reduced the risk of graft loss including death (RR 0.82, 95% CI 0.67 to 1.00) and of death-censored graft loss (RR 0.78, 95% CI 0.62 to 0.99, P < 0.05). No statistically significant difference between MMF and AZA treatment was found for all-cause mortality (16 studies, 2987 participants: RR 0.95, 95% CI 0.70 to 1.29). The risks of any acute rejection (22 studies, 3301 participants: RR 0.65, 95% CI 0.57 to 0.73, P < 0.01), biopsy-proven acute rejection (12 studies, 2696 participants: RR 0.59, 95% CI 0.52 to 0.68), and antibody-treated acute rejection (15 studies, 2914 participants: RR 0.48, 95% CI 0.36 to 0.65, P < 0.01) were reduced in MMF-treated patients. Meta-regression analyses suggested that the magnitude of the risk reduction for acute rejection may depend on the control rate (relative risk reduction (RRR) 0.34, 95% CI 0.10 to 1.09, P = 0.08), AZA dose (RRR 1.01, 95% CI 1.00 to 1.01, P = 0.10), and the use of cyclosporin A micro-emulsion (RRR 1.27, 95% CI 0.98 to 1.65, P = 0.07). Pooled analyses failed to show a significant and meaningful difference between MMF and AZA in kidney function measures. Data on malignancies and infections were sparse, except for cytomegalovirus (CMV) infections. The risk of CMV viraemia/syndrome (13 studies, 2880 participants: RR 1.06, 95% CI 0.85 to 1.32) was not statistically significantly different between MMF- and AZA-treated patients, whereas the likelihood of tissue-invasive CMV disease was greater with MMF therapy (7 studies, 1510 participants: RR 1.70, 95% CI 1.10 to 2.61).
Adverse event profiles varied: gastrointestinal symptoms were more likely in MMF-treated patients, while thrombocytopenia and elevated liver enzymes were more common with AZA treatment. MMF was superior to AZA for improving graft survival and preventing acute rejection after kidney transplantation. These benefits must be weighed against potential harms such as tissue-invasive CMV disease. However, assessment of the evidence on safety outcomes was limited by rare events in the observation periods of the studies (e.g. malignancies) and by inconsistent reporting and definitions (e.g. infections, adverse events). Thus, balancing the benefits and harms of the two drugs to decide which agent an individual patient should be started on remains a major task of the transplant physician.
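The pooled risk ratios quoted above come from inverse-variance random-effects meta-analysis. As an illustration only, with made-up 2x2 trial counts rather than the review's data, a DerSimonian-Laird pooling of log risk ratios can be sketched as:

```python
import math

def log_rr(events_t, n_t, events_c, n_c):
    """Log risk ratio and its variance from a 2x2 table."""
    rr = (events_t / n_t) / (events_c / n_c)
    var = 1/events_t - 1/n_t + 1/events_c - 1/n_c
    return math.log(rr), var

def pool_random_effects(studies):
    """DerSimonian-Laird random-effects pooling of log risk ratios.
    Returns the pooled RR with a 95% confidence interval."""
    ys, vs = zip(*(log_rr(*s) for s in studies))
    w = [1/v for v in vs]                        # fixed-effect weights
    ybar = sum(wi*yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi*(yi - ybar)**2 for wi, yi in zip(w, ys))
    k = len(studies)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
    w_re = [1/(v + tau2) for v in vs]            # random-effects weights
    pooled = sum(wi*yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1/sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96*se),
            math.exp(pooled + 1.96*se))

# Hypothetical trials: (rejections_MMF, n_MMF, rejections_AZA, n_AZA)
studies = [(30, 150, 48, 150), (22, 100, 35, 105), (55, 250, 80, 240)]
rr, lo, hi = pool_random_effects(studies)
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

When between-study heterogeneity is negligible, tau² collapses to zero and the estimate reduces to the fixed-effect inverse-variance pool; the review's reported RRs (e.g. 0.65 for any acute rejection) were computed with this class of model.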
Appraisal of work hazards and safety in the industrial estate of Jeddah.
An environmental study for the appraisal of work hazards and safety in the Jeddah Industrial Estate (JIE), Saudi Arabia, has been conducted. The study is based upon a representative (stratified random) sample of 44 enterprises, including 52 plants and employing 5830 workers. Nearly two-thirds of the workers have heat exposure, originating from climatic heat and heat dissipated from industrial operations, while exposure to noise is slightly less common and is attributed to noisy operations and machinery and to a lack of meticulous maintenance; both exposures are mild in most of the plants and moderate in some. Mild exposures to non-ionizing radiation (UV and IR) and to deficient illumination occur in 25% and 19.2% of the plants studied, respectively. Respiratory exposure to chemical agents (organic and inorganic dusts, metal fumes, gases and vapours, including asphyxiants, irritants, liver and nervous system offenders, and acid and alkali mists) occurs in 75% of the plants, particularly in the medium-size enterprises; however, it is mainly mild, with a few moderate and severe exposures. Skin absorption contributes to the uptake of chemical agents in 29% of the plants, and direct skin contact with chemicals (particularly lubricating oils) occurs in 81% of the plants. Meanwhile, only eight of the 32 plants where controls for physical hazards are required (51.2%) apply engineering controls, and even in a few of these plants the efficiency of the control measures has been rated 'bad'. A few of them provide personal protective equipment, and even then no maintenance of this equipment is provided. The level of safety is better in the large plants than in the small and medium-size plants; the safety score is best in the recently established plants and worst in the plastic industry, which is relatively old. The appraisal of fire protection is better than that of safety, due to efficient supervision by the General Directorate of Civil Defense (GDCD).
However, most of the safety problems are managerial and are preventable. First aid is present in all enterprises, as required by the Saudi Labor Laws; however, an in-plant medical service is present in 75% of the large enterprises, in 31.6% of the medium-size, and in only 17.6% of the small enterprises. Also, satisfactory medical, accident, and absenteeism records exist in only 15.9% of the enterprises; safety supervision exists in 27.3% and safety education in 91% of them, while no environmental monitoring is carried out in any enterprise. Sanitation facilities exist in satisfactory numbers in most of the enterprises; however, their maintenance is poor in most of them, due to a lack of hygienic supervision. All enterprises dispose of their liquid wastes into the JIE sewerage system without any treatment, while the solid wastes are collected by the city authorities in 56.8% of them; both wastes are anticipated to cause environmental pollution problems.
Correspondences between afferent innervation patterns and response dynamics in the bullfrog utricle and lagena.
Otoconial afferents in the bullfrog were characterized as gravity or vibratory sensitive by their resting activity and their responses to head tilt and vibration. The responses of gravity afferents to head tilt were tonic, phasic-tonic, or phasic. A few afferents, termed vibratory/gravity afferents, had both gravity and vibratory sensitivity. Functionally identified otoconial afferents were injected with Lucifer Yellow and subsequently traced to their peripheral arborizations. Morphological maps, previously constructed with the scanning electron microscope, were used to identify microstructural features of the sensory maculae associated with the peripheral arborizations of dye-filled afferents. The utricular and lagenar maculae are each composed of a specialized central band surrounded by a peripheral field. The central bands comprise densely packed medial rows and more sparsely packed lateral rows of hair cells. Hair cells exhibit a variety of surface topographies that correspond with their macular location. The response dynamics of afferents in the utricle and lagena correspond with the macular locations of their peripheral arborizations. Tonic afferents were traced to hair cells in the peripheral field. Phasic-tonic and phasic afferents innervated hair cells in the lateral rows of the central band, the former innervating hair cells at the edges of the central band and the latter innervating hair cells located more medially. Afferents with vibratory sensitivity were traced to hair cells in the medial rows of the lagenar central band. The response dynamics of afferents also corresponded with the surface topography of their innervated hair cells. Tonic and phasic-tonic gravity afferents innervated hair cells with stereociliary arrays markedly shorter than their kinocilium (Lewis and Li types B and C), while phasic gravity and vibratory afferents innervated hair cells with stereociliary arrays nearly equal in length to their kinocilium (Lewis and Li types E and F).
Vibratory sensitivity was uniquely associated with hair cells possessing a bulbed kinocilium (Lewis and Li type E), while afferents sensitive to both gravity and vibration innervated hair cells from both of the above groups. We argue that afferent response dynamics are determined, at least in part, at the level of the sensory hair bundle, and that morphological variations of the kinocilium and the otoconial membrane are dictated by specialization of sensitivity. We propose that morphological variations of the kinocilium reflect variations in its viscoelastic properties, and that these properties determine the nature of the mechanical coupling between the stereociliary array and the otoconial membrane.
Event-related brain potentials and working memory function in healthy humans after single-dose and prolonged intranasal administration of adrenocorticotropin 4-10 and desacetyl-alpha-melanocyte stimulating hormone.
Neuropeptides of the adrenocorticotropin/melanocorticotropin (ACTH/MSH) family are potent modulators of cognitive function. Their neurobehavioral activity is principally encoded in the 4-10 fragment of the ACTH/MSH molecule; in humans, it has been shown to pertain primarily to functions of attentive stimulus/response processing. The aims of this study were (1) to examine the effects of ACTH 4-10 on event-related brain potentials (ERPs) and behavioral indicators of stimulus encoding within working memory; (2) to compare the effects after a single dose and after prolonged treatment with ACTH 4-10; and (3) to compare the effects of ACTH 4-10 with those of desacetyl-alpha-MSH (i.e., ACTH 1-13 amide), which, like ACTH 4-10, binds to the known brain melanocortin receptors (MC-Rs) but with distinctly higher affinity. Double-blind, placebo-controlled experiments were performed in 60 healthy subjects. The authors monitored ERPs and reaction times while these subjects performed an auditory vigilance task ("oddball"). Recall was tested on a verbal short-term memory task including different word categories (neutral, rare, food, sex). After a single (1 mg) as well as prolonged intranasal administration (1 mg/day over a period of 6 weeks), ACTH 4-10 enhanced the positive slow wave in ERPs to target stimuli of the vigilance task (p < 0.05), but left the classic P3 unaffected. Moreover, single-dose and prolonged administration of ACTH 4-10 increased the rate of false responses during vigilance (p < 0.01). On the short-term memory task, ACTH 4-10 also impaired recall of neutral words (p < 0.05). Equimolar doses of desacetyl-alpha-MSH did not influence ERPs, either after a single dose or after prolonged treatment. Similar to ACTH 4-10, desacetyl-alpha-MSH increased the error rate during vigilance and acutely impaired the recall of neutral words. 
The increase in ERP slow-wave positivity, in conjunction with behavioral impairments after treatment with ACTH 4-10, complemented previous results of inferior focusing of attention and a less concise structure of thought after administration of ACTH 4-10. The changes indicated an impairment in differential processing of relevant versus irrelevant contents within the working memory, and, in this regard, might mimic aspects of psychopathologic disturbances of attention and thought processes. Their persistence after prolonged treatment with ACTH 4-10 suggests an activation of mechanisms subserving the consolidation of the peptide's effects. The poor efficacy of desacetyl-alpha-MSH suggests that the known MC-Rs may be irrelevant for mediating cognitive effects of this neuropeptide family. |
[Practice and correlates of partner notification of HIV infection status among 307 HIV-infected individuals in Shanghai].
To investigate partner notification and HIV antibody testing among the sexual partners of people living with HIV, and to analyze the factors influencing the rate of partner notification in Shanghai. HIV-positive people were recruited from Jiading, Jinan and Xuhui Districts in Shanghai. All had been diagnosed with HIV between July 1, 1998 and July 30, 2014 and were ≥ 16 years old; those with poor compliance, unwillingness to cooperate, mental disorders, deafness, or other conditions that prevented them from properly answering questions were excluded. Self-designed questionnaires were administered face to face to collect demographics, HIV-related knowledge, HIV testing history, the status of sexual partners before the HIV diagnosis, and partner notification. Differences in notification and in HIV-positive rates among the different types of sexual partners were compared by chi-square tests. Factors influencing the rate of partner notification were analyzed by logistic regression, and OR (95% CI) values were calculated. A total of 307 people living with HIV were surveyed; 276 (89.9%) were male and 31 (10.1%) female. The notification rates for spouses, regular homosexual partners, regular heterosexual partners, non-regular non-commercial heterosexual partners, non-regular non-commercial homosexual partners, and commercial sexual partners were 68.2% (105/154), 44.7% (119/266), 21.4% (22/103), 5.8% (3/52), 5.5% (43/787), and 0.4% (1/235), respectively (χ(2) = 5.22, P < 0.001). Of the notified sexual partners, 277 underwent HIV antibody testing; 90 were HIV-positive (32.5%). 
Time since confirmed diagnosis (OR: 0.37, 95% CI: 0.16-0.86), whether the notifying staff encouraged the HIV-positive person to mobilize their sexual partners to take an HIV-antibody test (OR: 9.63, 95% CI: 3.77-24.55), whether someone else was present during notification (OR: 5.57, 95% CI: 1.96-15.78), and relationship stability (OR: 28.55, 95% CI: 7.93-102.75; OR: 14.13, 95% CI: 4.87-41.02) were associated with HIV-positive people disclosing their infection status to their sexual partners. In the three research districts of Shanghai, the rate of partner notification was low, but the HIV antibody positive rate among notified partners was high. A shorter time since confirmed diagnosis, staff not encouraging HIV-positive people to mobilize their partners for HIV-antibody testing, no one else being present when people were told they were HIV-positive, and the absence of a stable sexual relationship were all associated with lower rates of partner notification. |
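The chi-square comparison of notification rates described above can be sketched from the reported counts. This is an illustrative recomputation in plain Python (no statistics package), not a reproduction of the paper's analysis, and the statistic it produces need not match the value reported in the abstract.

```python
# Chi-square test of independence on a 6x2 table of
# notified vs. not-notified sexual partners (counts from the abstract).
notified = [105, 119, 22, 3, 43, 1]
totals = [154, 266, 103, 52, 787, 235]
not_notified = [t - n for t, n in zip(totals, notified)]

grand = sum(totals)
col_sums = [sum(notified), sum(not_notified)]

chi2 = 0.0
for i in range(len(totals)):
    for j, obs in enumerate((notified[i], not_notified[i])):
        # Expected count under independence: row total * column total / grand total
        exp = totals[i] * col_sums[j] / grand
        chi2 += (obs - exp) ** 2 / exp

dof = (len(totals) - 1) * (2 - 1)  # = 5
print(f"chi2 = {chi2:.1f} on {dof} df")
```

With notification rates ranging from 0.4% to 68.2%, the statistic lies far beyond the df = 5 critical value of 20.52 at P = 0.001, consistent with the reported significance level.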
Neither dopamine nor dobutamine reverses the depression in mesenteric blood flow caused by positive end-expiratory pressure.
Positive end-expiratory pressure (PEEP) has been shown to cause a depression of mesenteric blood flow (MBF) and redistribution of blood flow away from the mesenteric vascular bed. We sought to determine whether two commonly used vasoactive agents, dopamine, a known mesenteric vasodilator and inotrope, and dobutamine, with its inotropic properties, would correct the MBF depression caused by PEEP. DESIGN, MATERIAL, AND METHODS: Sprague-Dawley rats, 180 to 250 g, were treated with mechanical ventilation and either no PEEP (control group) or increasing levels (0, 10, 15, and 20 cm of H2O pressure) of PEEP (PEEP group). Also, we evaluated PEEP's effect on MBF and cardiac output (CO) under the influence of a continuous infusion of 2.5 or 12.5 microgram/kg/min of dopamine or 2.5 or 12.5 microgram/kg/min of dobutamine. Cardiac output was measured, along with mesenteric A1, A2, and A3 arteriolar intraluminal radii and A1 arteriolar optical Doppler velocities obtained by in vivo videomicroscopy. After 20 cm of H2O pressure PEEP was attained, two boluses of 2 mL of 0.9% normal saline were given. The MBF was calculated from vessel radius and red blood cell velocity. There were no significant changes from baseline in mean arterial pressure or A2 or A3 diameters in any of the groups. Both MBF and CO were unchanged over time in the control group. The MBF was reduced 78% (p < 0.05) and the CO was reduced 31% (p < 0.05) from baseline at 20 cm of H2O pressure PEEP. After 4 mL of normal saline, the MBF was still 53% below baseline (p < 0.05), while the CO had returned to baseline in the PEEP group. Low-dose dopamine partially ameliorated both the decrease in CO and MBF caused by PEEP, but 4 mL of normal saline was required in addition to the low-dose dopamine to return MBF to baseline levels while on 20 cm of H2O pressure PEEP. 
High-dose dopamine with the addition of 4 mL of normal saline returned CO to baseline on 20 cm of H2O pressure PEEP, but MBF remained approximately 46% below baseline despite fluid boluses. Neither low-dose nor high-dose dobutamine, with or without fluid boluses, had an appreciable positive effect on CO or MBF. It is clear that inotropes are not a replacement for adequate fluid loading to correct the depression in cardiac output and mesenteric blood flow associated with the use of mechanical ventilation and PEEP. Low-dose dopamine may serve as an adjunct to adequate fluid resuscitation to improve MBF. |
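The abstract notes that MBF was calculated from vessel radius and red blood cell velocity. A minimal sketch of that volumetric-flow calculation, Q = v·πr², follows; the radius and velocity values are hypothetical examples, and the paper's exact formula (including any velocity-profile correction factor) is not given.

```python
import math

def mesenteric_flow(radius_um: float, velocity_mm_s: float) -> float:
    """Volumetric flow Q = v * pi * r^2, returned in nL/s.

    radius_um: intraluminal radius in micrometres (hypothetical value)
    velocity_mm_s: red-cell velocity in mm/s (hypothetical value)
    """
    r_mm = radius_um / 1000.0
    area_mm2 = math.pi * r_mm ** 2
    q_ul_s = velocity_mm_s * area_mm2  # mm^3/s == uL/s
    return q_ul_s * 1000.0             # convert to nL/s

# Illustrative baseline vs. on-PEEP values: a modest radius change plus a
# large velocity drop combine into a large flow reduction.
baseline = mesenteric_flow(50.0, 20.0)
on_peep = mesenteric_flow(45.0, 5.0)
drop = 1.0 - on_peep / baseline
print(f"flow reduction: {drop:.0%}")
```

Because flow scales with the square of the radius, even small changes in arteriolar caliber amplify the effect of reduced red-cell velocity.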
Peginterferon-alpha-2a (40 kD): a review of its use in the management of chronic hepatitis C.
Peginterferon-alpha-2a (40 kD) is a new 'pegylated' subcutaneous formulation of interferon-alpha-2a that has been developed to improve on the pharmacokinetic profile and therapeutic efficacy of interferon-alpha-2a. Peginterferon-alpha-2a (40 kD) is produced by the covalent attachment of recombinant interferon-alpha-2a to a branched mobile 40 kD polyethylene glycol moiety, which shields the interferon-alpha-2a molecule from enzymatic degradation, reduces systemic clearance and enables once-weekly administration. Peginterferon-alpha-2a (40 kD) was significantly more effective than interferon-alpha-2a in interferon-alpha therapy-naive adults with chronic hepatitis C in three nonblind, randomised, multicentre trials. Virological responses (intention-to-treat results) were achieved in 44 to 69% of patients with or without cirrhosis after 48 weeks of treatment with peginterferon-alpha-2a (40 kD) 180 microg/week; sustained virological responses 24 weeks after the end of treatment occurred in 30 to 39% of patients. Virological responses at the end of treatment and at long-term follow-up were significantly higher than those achieved with interferon-alpha-2a. Peginterferon-alpha-2a (40 kD) was significantly more effective than interferon-alpha in patients with or without cirrhosis infected with HCV genotype 1. Sustained biochemical responses achieved with peginterferon-alpha-2a (40 kD) 180 microg/week ranged from 34 to 45% and were significantly higher than with interferon-alpha-2a. Recipients of peginterferon-alpha-2a (40 kD) also experienced histological improvements; 24 weeks after discontinuation of treatment with peginterferon-alpha-2a (40 kD) 180 microg/week, 54 to 63% of patients had a ≥2-point improvement in histological activity index score. Peginterferon-alpha-2a (40 kD) produced histological responses in patients (with or without cirrhosis) with or without a sustained virological response. 
Peginterferon-alpha-2a (40 kD) produced better results than interferon-alpha-2a alone or interferon-alpha-2b plus oral ribavirin on various measures of quality of life in patients with chronic hepatitis C. The tolerability profile of peginterferon-alpha-2a (40 kD) is broadly similar to that of interferon-alpha-2a in patients with chronic hepatitis C with or without cirrhosis. Headache, fatigue and myalgia are among the most common adverse events. Peginterferon-alpha-2a (40 kD) administered once weekly produces significantly higher sustained responses, without compromising tolerability, than interferon-alpha-2a administered thrice weekly in noncirrhotic or cirrhotic patients with chronic hepatitis C, including those infected with HCV genotype 1 - a group in whom interferon-alpha treatment has usually been unsuccessful. Peginterferon-alpha-2a (40 kD) is a valuable new treatment option and appears poised to play an important role in the first-line treatment of patients with chronic hepatitis C, including difficult-to-treat patients such as those with compensated cirrhosis and/or those infected with HCV genotype 1. |
Motor units and histochemistry in rat lateral gastrocnemius and soleus muscles: evidence for dissociation of physiological and histochemical properties after reinnervation.
A reexamination of the question of specificity of reinnervation of fast and slow muscle was undertaken using the original "self" nerve supply to the fast lateral gastrocnemius (LG) and slow soleus muscles in the rat hindlimb. This paradigm takes advantage of the unusual situation of a common nerve branch, which supplies both a fast and slow muscle, and of the opportunity to keep the reinnervating nerve in its normal position. In addition it provides a test of the effects of cross-reinnervation among muscles of the same functional group. The properties of soleus and LG muscles and of individual muscle units were characterized in normal rats and in rats 4-14 mo after cutting the lateral gastrocnemius-soleus (LGS) nerve and suture of the proximal stump to the dorsal surface of the LG muscle. Individual muscle units were functionally isolated by stimulation of single motor axons to LG or soleus muscle contained in teased filaments in the L4 and L5 ventral roots. Motor units were classified as fast contracting fatigable (FF), fast contracting fatigue resistant (FR), and slow (S) on the basis of criteria described in the cat by Burke et al. and applied to rat muscle units by Gillespie et al. Muscle fibers were classified as fast glycolytic (FG), fast oxidative glycolytic (FOG), and slow oxidative (SO) on the basis of histochemical staining for myosin ATPase, nicotinamide-adenine dinucleotide diaphorase (NADH-D), and alpha-glycerophosphate dehydrogenase (alpha-GPD). Reinnervated muscles developed less force and weighed less in accordance with having fewer than normal motor units and having lost denervated muscle fibers. Normal LG contained a small proportion of S-type motor units (9%), whereas the majority (80%) of control soleus units were S type. After reinnervation, each muscle contained similar proportions of fast and slow motor units, with S-type units constituting 30% of units in both muscles. 
When compared with the normal motor-unit sample, there was no significant change in average twitch and tetanic force in reinnervated muscles for each type of motor unit. However, the range within each type was greater, and there was considerable overlap between types. Twitch contraction time was inversely correlated with force in normal and reinnervated muscles as shown previously in self- and cross-reinnervated LGS in the cat. Changes in proportions of motor units in reinnervated LG were accompanied by corresponding changes in histochemical muscle types. This contrasted with reinnervated soleus in which the proportion of muscle fiber types was not significantly changed from normal despite significant change in motor-unit proportions.(ABSTRACT TRUNCATED AT 400 WORDS) |
On the influence of informational content and key-response effect mapping on implicit learning and error monitoring in the serial reaction time (SRT) task.
The present experiment was designed to enhance our understanding of how response effects with varying amounts of useful information influence implicit sequence learning. We recorded event-related brain potentials while participants performed a modified version of the serial reaction time task (SRTT). In this task, participants had to press one of four keys corresponding to four letters on a computer screen. Unknown to the participants, in some parts of the experimental blocks the stimuli appeared in a repetitive (structured) deterministic sequence, whereas in other parts stimuli were determined randomly. Four groups of participants, differing in the presentation of tones after each response, performed the SRTT. In the no tone group, no tones were presented after a response. The other three groups differed with respect to the melody generated by the key presses: in the unmelodic group, one out of four different tones was chosen randomly and presented immediately after a response. In the consistent melody group, the press of a response key always resulted in the production of the same tone, resulting in a repetitive melody during structured parts of the sequence (consistent redundant effect). In the inconsistent melody group, the "melody" produced in the sequenced parts of the blocks was identical to that of the consistent melody group, but the same response could produce two different tones depending on the actual position in the stimulus sequence. Thus, during structured sequences, subjects heard the same melody as in the consistent melody group, but every key press could be followed by one out of two different tones. To disentangle effects of sequence awareness from our experimental manipulations, all analyses were restricted to implicit learners. All four groups showed sequence learning, but to a different degree: in general, every kind of tone improved sequence learning relative to the no tone group. 
However, unmelodic tones were less beneficial for learning than tones forming a melody. Tones mapped consistently to response keys improved learning faster than tones producing the same melody, but not mapped consistently to keys. However, at the end of the learning phase, the two melody groups did not differ in the amount of sequence learning. The error-related negativity (ERN) increased with sequence learning (larger ERN at the end of the experiment for trials following the sequence compared to random trials) and this effect was more pronounced for the groups that showed more learning. These findings indicate that response effects containing useful information foster sequence learning even if the same response can produce different effects. Furthermore, we replicated earlier results showing that the importance of an error with respect to the task at hand modulates the activity of the human performance monitoring system. |
The Valence-Detrapping Phase Transition in a Crystal of the Mixed-Valence Trinuclear Iron Cyanoacetate Complex [Fe(3)O(O(2)CCH(2)CN)(6)(H(2)O)(3)].
A mixed-valence trinuclear iron cyanoacetate complex, [Fe(3)O(O(2)CCH(2)CN)(6)(H(2)O)(3)] (1), was prepared, and the nature of the electron-detrapping phase transition was studied by multitemperature single-crystal X-ray structure determination (296, 135, and 100 K) and calorimetry, in comparison with an isostructural mixed-metal complex, [CoFe(2)O(O(2)CCH(2)CN)(6)(H(2)O)(3)] (2). The mixed-valence states at various temperatures were also determined by (57)Fe Mössbauer spectroscopy. The Mössbauer spectrum of 1 showed a valence-detrapped state at room temperature. With decreasing temperature the spectrum was abruptly transformed into a valence-trapped state around 129 K, corresponding well to the heat-capacity anomaly due to the phase transition (T(trs) = 128.2 K) observed in the calorimetry. The single-crystal X-ray structure determination revealed that 1 has an equilateral structure at 296 and 135 K, and that the structure changes into an isosceles one at 100 K due to the electron trapping. The crystal system of 1 at 296 K is rhombohedral, space group R3̄ with Z = 6 and a = 20.026(1) Å, c = 12.292(2) Å; at 135 K, a = 19.965(3) Å, c = 12.145(4) Å; and at 100 K, the crystal system changes to triclinic, space group P1̄, with Z = 2 and a = 12.094(2) Å, b = 12.182(3) Å, c = 12.208(3) Å, alpha = 110.04(2) degrees, beta = 108.71(2) degrees, gamma = 109.59(2) degrees. The X-ray structure determination at 100 K suggests that the electronically trapped phase of 1 at low temperature is an antiferroelectrically ordered phase, because the distorted Fe(3)O molecules, which are expected to possess a nonzero electronic dipole moment, are oriented alternately in opposite directions with respect to the center of symmetry. On the other hand, no heat-capacity anomaly was observed in 2 between 7 and 300 K, and X-ray structure determination indicated that 2 shows no structure change when the temperature is decreased from 296 K down to 102 K. 
The crystal system of 2 at 296 K is rhombohedral, space group R3̄ with Z = 6 and a = 19.999(1) Å, c = 12.259(1) Å; at 102 K, a = 19.915(2) Å, c = 12.061(1) Å. Even at 102 K the CoFe(2)O complex still has a C(3) axis, and the three metal ion sites are crystallographically equivalent because of a static positional disorder of two Fe(III) ions and one Co(II) ion. The activation energy of intramolecular electron transfer of 1 in the high-temperature disordered phase was estimated to be 3.99 kJ mol(-1) from the temperature dependence of the Mössbauer spectra with the aid of a spectral simulation including the relaxation effect of intramolecular electron transfer. Finally, the phase-transition mechanism of 1 was discussed in connection with the intermolecular dielectric interaction. |
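The activation energy above was estimated from the temperature dependence of the intramolecular electron-transfer relaxation rate. A generic Arrhenius fit (ln k versus 1/T, with slope = -Ea/R) can be sketched as follows; the rates here are synthetic, generated from the reported Ea with an arbitrary prefactor, not taken from the actual Mössbauer data.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

# Synthetic relaxation rates from an assumed Arrhenius law
# k = A * exp(-Ea / (R*T)); Ea matches the reported 3.99 kJ/mol,
# A is an arbitrary illustrative prefactor.
Ea_true = 3.99e3
A = 1.0e9
temps = [140.0, 180.0, 220.0, 260.0, 300.0]
rates = [A * math.exp(-Ea_true / (R * T)) for T in temps]

# Least-squares line ln k = ln A - (Ea/R) * (1/T)
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in rates]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
Ea_fit = -slope * R
print(f"Ea = {Ea_fit / 1000:.2f} kJ/mol")
```

On noiseless synthetic data the fit recovers the input Ea exactly; with real spectra the rates would come from relaxation-lineshape simulation at each temperature, as the abstract describes.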
Electron microscopical observations on the ciliate Furgasonia blochmanni Fauré-Fremiet, 1967: Part II: Morphogenesis and phylogenetic conclusions.
The ultrastructural events during cortical morphogenesis of Furgasonia blochmanni are studied by TEM. The kinetosomal proliferation in the somatic cortex at the beginning of morphogenesis produces kinetosomal triads. All kinetosomes of these triads have the same fibrillar systems as somatic monokinetids. Such somatic triads are also involved in the formation of the adoral membranelles of the opisthe. In the mature adoral membranelles the postciliary microtubules of the anterior and the middle kinetosomes of these triads as well as all kinetodesmal fibers are replaced by desmoses. Only the opisthe gets new adoral membranelles, since the parental adoral membranelles persist in the proter; however, the new paroral membranes of both the proter and the opisthe are newly formed as derivatives of the old paroral membrane. At the beginning of stomatogenesis the old paroral membrane divides into 2 parts of unequal length. The anterior part, which stays in the proter, splits longitudinally, forming a new kinety 1' and the anlage of the new paroral membrane. In the adult cell the anterior kinetosomes of the paroral dyads (the right kinetosomes of the paroral membrane) are already orientated like somatic kinetosomes. Therefore, no rotation is necessary when these kinetosomes become part of the somatic monokinetids of kinety 1'. The posterior kinetosomes of the dyads of the anterior part of the paroral membrane (the anlage of a new paroral membrane of the proter) remain orientated perpendicularly to the longitudinal axis of the cell, an orientation which is necessary when these kinetosomes send postciliary microtubules towards the forming cytopharyngeal basket. All kinetosomes of the posterior part of the former paroral membrane also become orientated such that triplet 9 points to the left towards the presumptive oral opening of the opisthe. 
During stomatogenesis both kinetosomes of the new paroral membranes are rotated by 90° compared to the longitudinal axis of the cell and send postciliary microtubules towards the forming cytopharynx. In contrast to the adult cell, during stomatogenesis only the posterior kinetosomes of the paroral dyads are ciliated while the newly formed anterior kinetosomes are barren. At the end of stomatogenesis the cilia of the posterior kinetosomes are resorbed and new cilia grow at the anterior kinetosomes. During stomatogenesis all kinetosomes of the anlagen of the new paroral membranes possess postciliary microtubules, kinetodesmal fibers and in some cases transverse microtubules as well. Following stomatogenesis, the kinetodesmal fibers and transverse microtubules are resorbed, and the orientation of the anterior kinetosomes reverts from perpendicular to parallel to the paroral membrane axis. These data from F. blochmanni are compared with the ultrastructural data on morphogenesis from Paraurostyla, Tetrahymena and Coleps. Finally the phylogenetically significant characters obtained from studies on morphology and morphogenesis in F. blochmanni and other nassulid ciliates are discussed, and a "scheme of argumentation of phylogenetic systematics" is presented for the nassulids. It is concluded that F. blochmanni is correctly classified within the nassulid suborder Nassulina and that the Nassulida including F. blochmanni certainly are a monophyletic group within the subphylum Cyrtophora Small, 1976. |
Evaluation of an ingestible telemetric temperature sensor for deep hyperthermia applications.
We have investigated the potential of an ingestible thermometric system (ITS) for use with a deep heating system. The ingestible sensor contains a temperature-sensitive quartz crystal oscillator. The telemetered signal is inductively coupled by a radiofrequency coil system to an external receiver. The sensors, covered with a protective silicone coating, are 10 mm in diameter and 20 mm long and are energized by an internal silver-oxide battery. Experimental studies were carried out to investigate the accuracy of the system and the extent of reliable operation of these sensors in an electromagnetic environment. Different measurements were repeated for five sensors. Calibration accuracy was verified by comparison with a Bowman probe in the temperature range 30 degrees C to 55 degrees C. Linear regression analysis of individual pill readings indicated a correlation within +/- 0.4 degrees C at 95% prediction intervals in the clinical temperature range of 35 degrees C to 50 degrees C. Further work is required to improve this accuracy to meet the quality assurance guidelines of +/- 0.2 degrees C suggested by the Hyperthermia Physics Center. Response times were determined by the exponential fit of heat-up and cool-down curves for each pill. All curves had correlation coefficients greater than 0.98. Time (mean +/- SE) to achieve 90% response during heat-up was 115 +/- sec. Time to cool down to 10% of initial temperature was 114 +/- 4 sec. The effects of antenna-to-sensor spacing and of the angle of orientation of the sensor relative to the antenna plane were also studied. Electromagnetic interference effects were studied by placing the sensor with a Bowman probe in a cylindrical saline phantom for tests in an annular phased array applicator. Different power levels at three frequencies--80, 100, and 120 MHz--were used. Accurate temperature readings could not be obtained when the electromagnetic power was on because of interference effects with the receiver. 
However, the temperatures read with the ITS immediately after the electromagnetic power was switched off correlated well with the Bowman probe readings across the power categories and the three frequencies used. The phantom was heated to steady state, with a Bowman probe placed at the central axis of the cylinder used as control. During the heat-up period and the steady state, the mean difference (+/- SE) between the ITS and Bowman probe was 0.12 degrees C (+/- 0.05 degrees C).(ABSTRACT TRUNCATED AT 400 WORDS) |
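The 90% response times above follow from a first-order exponential model of the heat-up curve. A sketch of such a fit on synthetic data follows; the time constant is assumed for illustration and is not taken from the authors' measurements.

```python
import math

# First-order model: T(t) = T_f + (T0 - T_f) * exp(-t / tau)
# tau_true, T0, Tf are assumed illustrative values.
tau_true, T0, Tf = 50.0, 30.0, 45.0
times = [float(t) for t in range(5, 155, 10)]
temps = [Tf + (T0 - Tf) * math.exp(-t / tau_true) for t in times]

# Log-linear fit through the origin: ln((Tf - T)/(Tf - T0)) = -t/tau
xs = times
ys = [math.log((Tf - T) / (Tf - T0)) for T in temps]
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
tau_fit = -1.0 / slope

# 90% response: 1 - exp(-t/tau) = 0.9  =>  t90 = tau * ln(10)
t90 = tau_fit * math.log(10)
print(f"tau = {tau_fit:.1f} s, t90 = {t90:.0f} s")
```

With an assumed time constant of 50 s, the 90% response time t90 = tau·ln 10 works out near 115 s, the order of magnitude reported above; the pills' actual time constants would be obtained by fitting the measured curves.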
The effect of haemonchosis and blood loss into the abomasum on digestion in sheep.
1. An experiment was conducted to determine the effect of the abomasal parasite, Haemonchus contortus, on the pattern of digestion and nutrient utilization in Merino sheep. There were three groups of sheep: infected with H. contortus (300 larvae/kg live weight) (n 5), sham-infected by transferring blood from the jugular vein to the abomasum, and uninfected (control) sheep (n 9) which were fed daily rations equal to amounts consumed by 'paired' animals in the two other treatment groups. A diet containing (g/kg): lucerne (Medicago sativa) chaff 490, oat chaff 480, ground limestone 10, urea 10, and sodium chloride 10, was given in equal amounts at 3-h intervals. 2. Continuous intrarumen infusions (8 d) of chromium and ytterbium were made in order to measure the flow of digesta through the rumen, duodenum and ileum, with 15NH4Cl included in the infusate for the final 3 d. The loss of blood into the gastrointestinal tract was measured using 51Cr-labelled erythrocytes and the rate of irreversible loss of plasma urea was measured with reference to a single intravenous injection of [14C]urea. Samples of rumen fluid were taken for analysis of volatile fatty acid (VFA) concentrations. 3. The infected and sham-infected sheep developed severe anaemia during the period over which digestion and metabolism measurements were made (packed cell volume 0.118 (SE 0.0042) and 0.146 (SE 0.0073) respectively). The corresponding rates of blood loss into the gastrointestinal tracts were 253 (SE 23) and 145 (SE 17) ml/d. 4. The proportions of VFA in rumen fluid were altered (P less than 0.05) in the infected group, with a decrease in the ratio, acetate: propionate (control 3.28, infected 2.58, standard error of difference (SED) 0.21). There was also an increase in rumen fluid outflow rate (P less than 0.01) from 4.05 litres/d in the control group to 5.53 litres/d in the infected group (SED 0.43). 
Water intake was higher (P less than 0.05) in the infected than in the control animals. 5. There was a decrease (P less than 0.05) in apparent digestion of organic matter in the forestomachs of infected sheep (0.32 compared with 0.39 in the control, SED 0.02). There was also a decrease (P less than 0.05) in the apparent digestion of organic matter across the whole digestive tract (0.65 control, 0.61 infected, SED 0.013).(ABSTRACT TRUNCATED AT 400 WORDS) |
Physical Therapists Forward Deployed on Aircraft Carriers: A Retrospective Look at a Decade of Service.
Navy physical therapists (PTs) have been a part of ship's company aboard aircraft carriers since 2002 because musculoskeletal injuries are the leading cause of lost duty time and disability. This article describes a decade of physical therapy services provided aboard aircraft carriers. A retrospective survey was conducted to evaluate the types of services provided, volume of workload, value of services provided, and impact of PTs on operational readiness for personnel aboard Naval aircraft carriers. Thirty-four reports documenting workload from PTs stationed onboard aircraft carriers were collected during the first decade of permanent PT assignment to aircraft carriers. This report quantifies a 10-yr period of physical therapy services (PT and PT Technician) in providing musculoskeletal care within the carrier strike group and adds to existing literature demonstrating a high demand for musculoskeletal care in operational platforms. A collective total of 144,211 encounters were reported during the 10-yr period. The number of initial evaluations performed by the PT averaged 1,448 per assigned tour. The average number of follow-up appointments performed by the PT per tour was 1,440. The average number of treatment appointments per tour provided by the PT and PT technician combined was 1,888. The average number of visits per patient, including the initial evaluation, was 3.3. Sixty-five percent of the workload occurred while deployed or out to sea during training periods. It was estimated that 213 medical evacuations were averted over the 10-yr period. There were no reports of adverse events or quality of care reviews related to the care provided by the PT and/or PT technician. Access to early PT intervention aboard aircraft carriers was associated with a better utilization ratio (lower average number of visits per condition) than has been reported in prior studies and suggests an effective utilization of medical personnel resources. 
The impact of Navy PTs serving afloat highlights the importance of sustaining these billets and indicates the potential benefit of establishing additional billets to support operational platforms with high volumes of musculoskeletal injury. Access to early PT intervention can prevent and rehabilitate injuries among operational forces, promote human performance optimization, increase readiness during wartime and peacetime, and accelerate rehabilitation from neuromusculoskeletal injuries. With the establishment of Electronic Health Records within all carrier medical groups, a repeat study may provide additional detail related to musculoskeletal injuries and guide medical planners in staffing sea-based operational platforms most effectively to care for the greatest source of battle and disease non-battle injuries and related disability in the military. |
How Do Common Comorbidities Modify the Association of Frailty With Survival After Elective Noncardiac Surgery? A Population-Based Cohort Study.
Older people with frailty have decreased postoperative survival. Understanding how comorbidities modify the association between frailty and survival could improve risk stratification and guide the development of interventions. Therefore, we evaluated whether the concurrent presence of common and high-risk comorbidities (dementia, chronic obstructive pulmonary disease [COPD], coronary artery disease [CAD], diabetes mellitus, heart failure [HF]) in conjunction with frailty might be associated with a larger decrease in postoperative survival after major elective surgery than would be expected based on the presence of the comorbidity and frailty on their own. This cohort study used linked administrative data from Ontario, Canada, to identify adults >65 years of age having elective noncardiac surgery from 2010 to 2015. Frailty was identified using a validated index; comorbidities were identified with validated codes. We evaluated the presence of effect modification (also called interaction) between frailty and each comorbidity on (1) the relative (or multiplicative) scale, by assessing whether the risk of mortality when both frailty and the comorbidity were present was different from the product of the risks associated with each condition; and (2) the absolute risk difference (or additive) scale, by assessing whether the risk of mortality when both frailty and the comorbidity were present was greater than the sum of the risks associated with each condition. A total of 11,150 (9.7%) people with frailty died versus 7,826 (2.8%) without frailty. After adjustment, frailty was associated with decreased survival (adjusted hazard ratio [HR] = 2.42; 95% confidence interval [CI], 2.31-2.54). On the relative (multiplicative) scale, only diabetes mellitus demonstrated significant effect modification (P value for interaction = .03; the combined risk was lower than expected). 
On the absolute risk difference (additive) scale, all comorbidities except CAD demonstrated effect modification of the association of frailty with survival. Co-occurrence of dementia with frailty carried the greatest excess risk (Synergy Index [S; the excess risk from exposure to both risk factors compared to the sum of the risks from each factor in isolation] = 2.29; 95% CI, 1.32-10.80). Common comorbidities modify the association of frailty with postoperative survival; however, this effect was only apparent when analyses accounted for effect modification on the absolute risk difference scale, as opposed to the relative scale. While the relative scale is more commonly used in biomedical research, smaller effects may be easier to detect on the risk difference scale. The concurrent presence of dementia, COPD, or HF with frailty was associated with excess mortality on the absolute risk difference scale. |
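The two interaction scales described above can be made concrete with a short sketch. The risk ratios below are hypothetical illustrations (only the Synergy Index definition and its point estimate of 2.29 come from the abstract); the formulas follow the definitions given in the text.

```python
# Sketch of additive- vs multiplicative-scale effect modification,
# using the Synergy Index (S) as defined in the abstract.
# The risk ratios below are invented for illustration, not study data.

def synergy_index(rr_both, rr_a, rr_b):
    """S = excess risk with both exposures / sum of excess risks alone."""
    return (rr_both - 1.0) / ((rr_a - 1.0) + (rr_b - 1.0))

def multiplicative_interaction(rr_both, rr_a, rr_b):
    """Ratio of the observed joint RR to the product expected under
    multiplicativity; 1.0 means no interaction on the relative scale."""
    return rr_both / (rr_a * rr_b)

# Hypothetical example: frailty alone RR = 2.4, dementia alone RR = 1.8,
# both together RR = 6.0.
s = synergy_index(6.0, 2.4, 1.8)
m = multiplicative_interaction(6.0, 2.4, 1.8)
print(round(s, 2))  # > 1 indicates super-additive (synergistic) excess risk
print(round(m, 2))  # > 1 would indicate super-multiplicative interaction
```

A value of S above 1 flags synergy on the additive scale even when, as here, the multiplicative-scale departure is modest, which mirrors the abstract's finding that interactions were mainly visible on the risk difference scale.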
Pediatric collaborative networks for quality improvement and research.
Despite efforts of individual clinicians, pediatric practices, and institutions to remedy continuing deficiencies in pediatric safety and health care quality, multiple gaps and disparities exist. Most pediatric diseases are rare; thus, few practices or centers care for sufficient numbers of children, particularly in subspecialties, to achieve large and representative sample sizes, and substantial between-site variation in care and outcomes persists. Pediatric collaborative improvement networks are multi-site clinical networks that allow practice-based teams to learn from one another, test changes to improve quality, and use their collective experience and data to understand, implement, and spread what works in practice. The model was initially developed in 2002 by an American Board of Pediatrics Workgroup to accelerate the translation of evidence into practice, improve care and outcomes for children, and serve as the gold standard for the performance-in-practice component of Maintenance of Certification requirements. Many features of an improvement network derive from the Institute for Healthcare Improvement's Breakthrough Series collaborative improvement model, including a focus on a high-impact condition or topic; support from clinical content and quality improvement experts; use of the Model for Improvement to set aims, obtain data for feedback, and test changes iteratively; infrastructure support for data collection, analysis and reporting, and quality improvement coaching; activities to enhance collaboration; and participation of multidisciplinary teams from multiple sites. In addition, networks typically include a population registry of the children receiving care for the improvement topic of interest. These registries provide large and representative study samples with high-quality data that can be used to generate information and evidence, as well as to inform clinical decision making. 
In addition to quality improvement, networks serve as large-scale health system laboratories, providing the social, scientific, and technical infrastructure and data for multiple types of research. Statewide, regional, and national pediatric collaborative networks have demonstrated improvements in primary care practice as well as care for chronic pediatric diseases (eg, asthma, cystic fibrosis, inflammatory bowel disease, congenital heart disease), perinatal care, and patient safety (eg, central line-associated blood stream infections, adverse medication events, surgical site infections); many have documented improved outcomes. Challenges to spreading the improvement network model exist, including the need for the identification of stable funding sources. However, these barriers can be overcome, allowing the benefits of improved care and outcomes to spread to additional clinical and safety topics and care processes for the nation's children. |
Systemic interventions for recurrent aphthous stomatitis (mouth ulcers).
Recurrent aphthous stomatitis (RAS) is the most frequent form of oral ulceration, characterised by recurrent oral mucosal ulceration in an otherwise healthy individual. At its worst, RAS can cause significant difficulties in eating and drinking. Treatment is primarily aimed at pain relief and the promotion of healing to reduce the duration of the disease or reduce the rate of recurrence. A variety of topical and systemic therapies have been utilised. Our objective was to determine the clinical effect of systemic interventions in the reduction of pain associated with RAS, and in the reduction of episode duration or frequency. We undertook electronic searches of: Cochrane Oral Health Group and PaPaS Trials Registers (to 6 June 2012); CENTRAL via The Cochrane Library (to Issue 4, 2012); MEDLINE via OVID (1950 to 6 June 2012); EMBASE via OVID (1980 to 6 June 2012); CINAHL via EBSCO (1980 to 6 June 2012); and AMED via PubMed (1950 to 6 June 2012). We searched reference lists from relevant articles and contacted the authors of eligible trials to identify further trials and obtain additional information. We included randomised controlled trials (RCTs) in which the primary outcome measures assess a reduction of pain associated with RAS, a reduction in episode duration or a reduction in episode frequency. Trials were not restricted by outcome alone. We also included RCTs of a cross-over design. Two review authors independently extracted data in duplicate. We contacted trial authors for details of randomisation, blinding and withdrawals. We carried out risk of bias assessment on six domains. We followed The Cochrane Collaboration statistical guidelines and risk ratio (RR) values were to be calculated using fixed-effect models (if two or three trials in each meta-analysis) or random-effects models (if four or more trials in each meta-analysis). A total of 25 trials were included, 22 of which were placebo controlled and eight of which made head-to-head comparisons (five trials had more than two treatment arms). 
Twenty-one different interventions were assessed. The interventions were grouped into two categories: immunomodulatory/anti-inflammatory and uncertain. Only one study was assessed as being at low risk of bias. There was insufficient evidence to support or refute the use of any intervention. No single treatment was found to be effective and therefore the results remain inconclusive in regard to the best systemic intervention for RAS. This is likely to reflect the poor methodological rigour of trials, and lack of studies for certain drugs, rather than the true effect of the intervention. It is also recognised that in clinical practice, individual drugs appear to work for individual patients and so the interventions are likely to be complex in nature. In addition, it is acknowledged that systemic interventions are often reserved for those patients who have been unresponsive to topical treatments, and therefore may represent a select group of patients. |
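The review's stated rule (a fixed-effect model for two or three trials per meta-analysis, random-effects for four or more) can be illustrated with a minimal inverse-variance pooling sketch on the log risk-ratio scale. The trial RRs and standard errors below are invented, not taken from the included studies.

```python
import math

# Minimal sketch of fixed-effect inverse-variance pooling of risk ratios
# on the log scale, as would apply to a meta-analysis of 2-3 trials under
# the review's rule. Per-trial RRs and log-RR standard errors are invented.

def pool_fixed(rrs, ses):
    """Fixed-effect pooled RR with a 95% CI from per-trial log-RR SEs."""
    weights = [1.0 / se ** 2 for se in ses]
    log_rrs = [math.log(rr) for rr in rrs]
    pooled_log = sum(w * l for w, l in zip(weights, log_rrs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * se_pooled),
          math.exp(pooled_log + 1.96 * se_pooled))
    return math.exp(pooled_log), ci

# Three hypothetical trials -> fixed-effect model per the review's rule.
rr, (lo, hi) = pool_fixed([0.8, 0.6, 0.7], [0.2, 0.25, 0.3])
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

A random-effects version would additionally estimate between-trial heterogeneity (e.g., DerSimonian-Laird tau-squared) and inflate the per-trial variances accordingly; the fixed-effect form above is the simpler of the two models the review describes.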
How Cannabis Alters Sexual Experience: A Survey of Men and Women.
Cannabis is reported to enhance sexual function; yet, previous studies have shown that physiological and subjective indices of sexual arousal and motivation were associated with decreased availability of circulating endocannabinoid concentrations. To explain this contradiction, we evaluated which aspects of sexual experience were enhanced or diminished by cannabis use. We used an online questionnaire with a convenience sample of people who had experience with cannabis. We asked questions regarding various aspects of sexual experience and whether they were affected by cannabis. We also asked about sexual dysfunction. The main outcome measures were the aspects of participants' sexual experience enhanced by cannabis. We analyzed results from 216 questionnaires completed by people with experience using cannabis with sex. Of these, 112 (52.3%) said they used cannabis to alter their sexual experience. Eighty-two participants (38.7%) said sex was better, 34 (16.0%) said it was better in some ways and worse in others, 52 (24.5%) said it was sometimes better, and only 10 (4.7%) said it was worse. Of 202 participants, 119 (58.9%) said cannabis increased their desire for sex, 149 of the 202 participants (73.8%) reported increased sexual satisfaction, 144 of 199 participants (74.3%) reported an increased sensitivity to touch, and 132 of 201 participants (65.7%) reported an increased intensity of orgasms. Of 199 participants, 139 (69.8%) said they could relax more during sex, and 100 of 198 participants (50.5%) said they were better able to focus. Of the 28 participants who reported difficulty reaching orgasm, 14 said it was easier to reach orgasm while using cannabis, but only 10 said that sex was better. The information in this study helps clarify which aspects of sexual function can be improved or interfered with by cannabis use. We asked about specific sexual effects of cannabis and were therefore able to understand the paradox of how cannabis can both improve and detract from sexual experience. 
Limitations of this study include possible selection bias: the sample included only people who responded to the advertisements and may not represent the general population of people who use cannabis. Moreover, over one-third of our sample said they use cannabis daily and so represent heavier-than-average users. Many participants in our study found that cannabis helped them relax, heightened their sensitivity to touch, and increased the intensity of their feelings, thus enhancing their sexual experience, while others found that cannabis interfered by making them sleepy and less focused, or had no effect on their sexual experience. Wiebe E, Just A. How Cannabis Alters Sexual Experience: A Survey of Men and Women. J Sex Med 2019;16:1758-1762. |
[Posterior urethral valves. Type of treatment and short- and long-term evaluation of renal function].
In this paper the authors analyzed the management and outcome of 81 cases of posterior urethral valves treated between January 1972 and April 1985. Fifty-three children presented with very severe urethral valves (grade 4 according to Hendren) and 28 with a mild valve type. All but two of the children in the first group had dilatation of the upper urinary tract. Vesicorenal reflux, usually severe, was present in 51 ureters; another 50 ureters were dilated without reflux. At presentation, 47% of the children in the first group had renal function within normal limits and the remaining 53% had reduced renal function. In 9 patients (8 under 50 days of age), a cutaneous vesicostomy according to Blocksom was performed, followed at the age of 10-18 months by transurethral valve destruction. In 6 infants, early in the series, the valve was removed with a hook via the perineal approach. In 38 patients we performed transurethral valve destruction with the no. 3 Bugbee electrode. Thirty-six of the 53 children (68%) underwent removal of the valvular obstruction alone. After removal of the obstruction, 32 of 37 nonrefluxing dilated ureters (86.5%) showed clear improvement. In 7 of 29 refluxing ureters a nephrectomy was carried out. Of the remaining 22 ureters, reflux resolved in 17 and improved in the other 5. Seventeen children had other types of operations after valve removal. Twenty-three ureters in 13 patients were reimplanted, with 3 failures (13%). In the 53 children with very severe valves (grade 4), supravesical diversions were not carried out. At follow-up of 51 children (6 months to about 14 years), renal function was within normal limits in 74% (versus 47% before the operation). The best results were obtained in children diagnosed and treated in the first months of life. Cutaneous vesicostomy proved to be a very useful method of treatment in very young babies with severe complications. 
We observed a slight terminal urethral stricture, easily dilatable, in only 1 child. All the children over the age of 12-13 years were continent. In the 28 children with mild valves, in addition to transurethral valve destruction, ureteric reimplantation was carried out successfully in 5 children (8 ureters) and vesical diverticulectomy in another 2. |
Developing self-aware mindfulness to manage countertransference in the nurse-client relationship: an evaluation and developmental study.
Nursing students, and even the nurses they become, bring not only caring to their wounded clients but also, at times, their own unresolved personal stress. Especially without mindful awareness, projection of the nurse's unacknowledged emotional encumbrances (countertransference) threatens the effectiveness of the nurse-client relationship. The purpose of this study was to evaluate whether Scheick's (2004) earlier instruction and use of a self-awareness development guide affected the self-control and sense of vibrant aliveness of students in a psychiatric nursing course, and then to use the information gained to develop a self-aware mindfulness template for managing countertransference in the nurse-client relationship. As gleaned from the literature, self-concept behaviors such as being fully and vibrantly alive in the present moment, exercising self-control, and enacting self-awareness dovetail with qualities of mindfulness. Building on earlier findings pertinent to increasing self-awareness, this research used developmental mixed methodology to evaluate ex post facto data on the sense of vibrant full aliveness and self-control of 15 psychiatric nursing students who used a self-awareness development guide compared with seven final-semester nursing students who lacked focused self-awareness instructional experiences. Both groups took Schutz's (1992) Element S: Self concept assessment. The quantitative portion of the research used inferential statistical analysis via the t test to evaluate student scores, yielding information pertinent to the qualitative developmental part of the study. Formative and summative committees composed of expert nursing and other psychologically oriented faculty then developed self-awareness and mindfully oriented teaching-learning tools that together formed a template highlighted by a mnemonic model termed STEDFAST. 
Although the findings are tempered by the small sample and the use of intact groups, the experimental group, when exposed to self-awareness-focused learning experiences, showed not only statistically significant changes in self-awareness, as determined in Scheick's (2004) earlier study, but also statistically significant changes in vibrant aliveness and self-control. The finding that deliberate learning experiences using a self-awareness development model likely affected all three mindful behaviors prompted the expert formative and summative committees to specifically target self-awareness and mindfulness for the template. Students implementing the template anecdotally report increased self-aware mindfulness, especially as they readily use the STEDFAST Self-Aware Mindfulness mnemonic portion in clinical encounters. The limitations of the study mandate that statistical conclusions be downplayed in favor of simply exploring data trends. Nevertheless, students using the template components describe growing in their ability to mindfully self-monitor toward better management of not only countertransference in psychiatric nursing but also their own learning in other areas of nursing practice. |
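The quantitative portion of the study rests on a two-sample t test between the instructed and comparison groups. A minimal stdlib-only sketch is below, using Welch's unequal-variance form since the groups differ in size; the scores are invented, as the study's raw Element S data are not reported here.

```python
import math
import statistics

# Rough sketch of the two-group comparison the study describes
# (t test on self-concept scores, instructed vs comparison students).
# The scores below are invented placeholders, not study data.

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

instructed = [7, 8, 6, 9, 8, 7, 8]   # hypothetical Element S scores (n=7)
comparison = [5, 6, 5, 7, 6, 5]      # hypothetical scores (n=6)
t, df = welch_t(instructed, comparison)
print(round(t, 2), round(df, 1))
```

With such small intact groups, the abstract's caution about downplaying statistical conclusions applies: a t statistic of this kind is suggestive, not definitive, which is consistent with the authors' framing of the results as data trends.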
Sleep quality and methylation status of core circadian rhythm genes among nurses and midwives.
Poor sleep quality or sleep restriction is associated with sleepiness and concentration problems. Moreover, chronic sleep restriction may affect metabolism, hormone secretion patterns and inflammatory responses. Limited recent reports suggest a potential link between sleep deprivation and epigenetic effects such as changes in DNA methylation profiles. The aim of the present study was to assess the potential association between poor sleep quality or sleep duration and the levels of 5-methylcytosine in the promoter regions of PER1, PER2, PER3, BMAL1, CLOCK, CRY1, CRY2 and NPAS2 genes, taking into account rotating night work and chronotype as potential confounders or modifiers. A cross-sectional study was conducted on 710 nurses and midwives (347 working on rotating nights and 363 working only during the day) aged 40-60 years. Data from in-person interviews about sleep quality, chronotype and potential confounders were used. Sleep quality and chronotype were assessed using the Pittsburgh Sleep Quality Index (PSQI) and the Morningness-Eveningness Questionnaire (MEQ), respectively. Morning blood samples were collected. The methylation status of the circadian rhythm genes was determined via quantitative methylation-specific real-time PCR (qMSP) assays using DNA samples derived from leucocytes. The proportional odds regression model was fitted to quantify the relationship between the methylation index (MI) as the dependent variable and sleep quality or sleep duration as the explanatory variable. Analyses were carried out for the total population as well as for subgroups of women stratified by the current system of work (rotating night shift/day work) and chronotype (morning type/intermediate type/evening type). A potential modifying effect of the system of work or the chronotype was examined using the likelihood ratio test. No significant findings were observed in the total study population. 
Subgroup analyses revealed two statistically significant associations between a shorter sleep duration and 1) methylation level in PER2 among day workers, especially those with the morning chronotype (OR = 2.31, 95%CI:1.24-4.33), and 2) methylation level in CRY2 among subjects with the intermediate chronotype, particularly among day workers (OR = 0.52, 95%CI:0.28-0.96). The study results demonstrated a positive association between average sleep duration of less than 6 hours and the methylation level of PER2 among morning chronotype subjects, and an inverse association for CRY2 among intermediate chronotype subjects, but only among day workers. Both the system of work and the chronotype turned out to be important confounders and modifiers in a number of analyses, making it necessary to consider them as potential covariates in future research on sleep deficiency outcomes. Further studies are warranted to explore this under-investigated topic. |
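The methylation index (MI) used as the dependent variable above is, in typical qMSP workflows, derived from cycle-threshold (Ct) values by relative quantification. The study's exact formula is not given here, so the 2^-ΔCt form below is a common-practice assumption, shown with invented Ct values.

```python
# Assumed (not from the paper) 2^-deltaCt quantification for qMSP:
# the methylated-specific reaction's Ct is compared against a
# methylation-independent reference reaction and scaled to a percentage.

def methylation_index(ct_methylated, ct_reference):
    """MI (%) via 2^-(Ct_meth - Ct_ref), capped at 100."""
    mi = 100.0 * 2.0 ** -(ct_methylated - ct_reference)
    return min(mi, 100.0)

# Hypothetical Ct values for one sample: the methylated signal crosses
# threshold 4 cycles after the reference, i.e. ~2^-4 of the reference.
print(round(methylation_index(32.0, 28.0), 1))
```

Treating MI as an ordinal outcome, as the proportional odds model in the study does, avoids assuming the index is normally distributed across its bounded 0-100 range.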
A derivative-free multistart framework for an automated noncoplanar beam angle optimization in IMRT.
The inverse planning of an intensity-modulated radiation therapy (IMRT) treatment requires decisions regarding the angles used for radiation incidence, even when arcs are used. The possibility of improving the quality of treatment plans by an optimized selection of the beam angle incidences, known as beam angle optimization (BAO), is seldom exploited in clinical practice. The inclusion of noncoplanar beam incidences in an automated optimization routine is even more unusual. However, for some tumor sites, the advantage of considering noncoplanar beam incidences is well known. This paper presents the benefits of using a derivative-free multistart framework for the optimization of the noncoplanar BAO problem. Multistart methods combine a global strategy for sampling the search space with a local strategy for improving the sampled solutions. The proposed global strategy allows a thorough exploration of the continuous search space of the highly nonconvex BAO problem. To avoid local entrapment, a derivative-free method is used as the local procedure. Additional advantages of the derivative-free method include the reduced number of function evaluations required to converge and the ability to use multithreaded computing. Twenty nasopharyngeal clinical cases were selected to test the proposed multistart framework. The planning target volumes included the primary tumor and the high- and low-risk lymph nodes. Organs at risk included the spinal cord, brainstem, optic nerves, chiasm, parotids, oral cavity, brain, and thyroid, among others. For each case, a setup with seven equispaced beams was chosen and the resulting treatment plan, obtained using a multicriteria optimization framework, was then compared against the coplanar and noncoplanar plans using the optimal beam setups obtained by the derivative-free multistart framework. 
The optimal noncoplanar beam setup obtained by the derivative-free multistart framework leads to high-quality treatment plans with better target coverage and improved organ sparing compared to treatment plans using equispaced or optimal coplanar beam angle setups. The noncoplanar treatment plans achieved, e.g., an average reduction of 6.1 Gy in the mean dose to the oral cavity and an average reduction of 7 Gy in the maximum dose to the brainstem when compared to the equispaced treatment plans. The noncoplanar BAO problem is an extremely challenging multimodal optimization problem that can be successfully addressed through a thoughtful exploration of the continuous, highly nonconvex BAO search space. The proposed framework is capable of calculating high-quality treatment plans and can thus be an interesting alternative toward automated noncoplanar beam selection, which is nowadays the natural trend in IMRT treatment planning. |
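The multistart scheme described above (global sampling of the search space plus a derivative-free local search on each sample) can be sketched on a toy objective. The code below uses stratified random starts and a simple compass/pattern search; it illustrates the general scheme only, not the paper's clinical BAO objective or its actual local solver.

```python
import math
import random

# Toy sketch of a derivative-free multistart framework: sample starting
# points across the search space (global strategy), improve each with a
# derivative-free local search (local strategy), and keep the best result.

def objective(x):
    """Multimodal 1-D test function with several local minima
    (a stand-in for the nonconvex BAO objective)."""
    return math.sin(3 * x) + 0.1 * (x - 2) ** 2

def pattern_search(f, x0, step=0.5, tol=1e-6):
    """Derivative-free local descent: probe x +/- step, halve the step
    whenever neither probe improves, stop below tol."""
    x, fx = x0, f(x0)
    while step > tol:
        improved = False
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return x, fx

def multistart(f, lo, hi, n_starts=30, seed=0):
    """Stratified random starts so the whole interval is covered."""
    rng = random.Random(seed)
    width = (hi - lo) / n_starts
    best = None
    for i in range(n_starts):
        x0 = lo + (i + rng.random()) * width  # one start per stratum
        x, fx = pattern_search(f, x0)
        if best is None or fx < best[1]:
            best = (x, fx)
    return best

x_best, f_best = multistart(objective, -5.0, 10.0)
print(round(x_best, 3), round(f_best, 3))
```

Each local search is cheap and independent, which is what makes the multithreaded computing mentioned in the abstract attractive: the n_starts local searches can run in parallel with no shared state.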
A case for rimantadine to be marketed in Canada for prophylaxis of influenza A virus infection.
To evaluate the efficacy and safety of amantadine and rimantadine, the first-generation antivirals, for the prophylaxis of influenza A virus infection. A systematic search of the English language literature using MEDLINE, EMBASE, Current Contents and the Cochrane database from 1966 to April 2002, as well as a manual search of references from retrieved articles, was performed. Prospective, randomized, controlled clinical trials evaluating amantadine and rimantadine for prophylaxis of naturally occurring influenza A illness were considered. The control arm used either a placebo or an antiviral agent. Each trial was assessed by two authors to determine the adequacy of randomization and description of withdrawals. Efficacy data were extracted according to a predefined protocol. Discrepancies in data extraction among the investigators were resolved by consensus. Nine prophylaxis studies of amantadine and rimantadine met the criteria for this systematic review. Seven amantadine versus placebo trials (n=1797), three rimantadine versus placebo trials (n=688) and two amantadine versus rimantadine studies (n=455) were included in the meta-analysis on the prevention of influenza A illness. The summary of results for the relative odds of illness indicated a 64% reduction in the amantadine group compared with placebo (OR 0.36, 95% CI 0.23 to 0.55, P≤0.001), a 75% reduction in illness for the rimantadine group compared with placebo (OR 0.25, 95% CI 0.07 to 0.97, P=0.05) and no significant difference in the odds of illness for the amantadine versus rimantadine groups (OR 1.15, 95% CI 0.57 to 2.32, P=0.32). The summary of results examining adverse events showed significantly higher odds of central nervous system adverse reactions and premature withdrawal from the clinical trials in the amantadine-treated group than in the placebo-treated group. 
Compared with the placebo-treated group, the rimantadine-treated group did not have a significantly higher rate of withdrawal or central nervous system events. However, there was a significant increase in the odds of gastrointestinal adverse events for those treated with rimantadine compared with those treated with placebo (OR 3.34, 95% CI 1.17 to 9.55, P=0.03). In the trials comparing amantadine with rimantadine, rimantadine was associated with an 82% reduction in the odds of central nervous system events (OR 0.18, 95% CI 0.03 to 1.00, P=0.05) and a 60% reduction in the odds of discontinuing treatment (OR 0.40, 95% CI 0.20 to 0.79, P=0.009). This meta-analysis demonstrates that amantadine and rimantadine are superior to placebo in the prevention of influenza A illness. Both antiviral agents were associated with more adverse events than placebo; however, the use of amantadine is associated with significantly higher numbers of central nervous system events and treatment withdrawals compared with rimantadine. Thus, rimantadine should be the preferred agent in this class for the prevention of influenza A virus infection and should be made available in Canada. |
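The percentage reductions quoted in the abstract are simply one minus the odds ratio, expressed as a percentage. A trivial helper reproduces the abstract's own figures:

```python
# Percent reduction in the odds implied by an odds ratio below 1.
# The four ORs below are taken directly from the abstract.

def percent_reduction(odds_ratio):
    """100 * (1 - OR), rounded to the whole percent."""
    return round(100.0 * (1.0 - odds_ratio), 0)

print(percent_reduction(0.36))  # amantadine vs placebo -> 64.0
print(percent_reduction(0.25))  # rimantadine vs placebo -> 75.0
print(percent_reduction(0.18))  # CNS events, rimantadine vs amantadine -> 82.0
print(percent_reduction(0.40))  # treatment withdrawals -> 60.0
```

Note that this is a reduction in the *odds*, not the risk; for common outcomes the OR overstates the corresponding risk reduction, though for the rarer outcomes here the two are close.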
Utility of a 13C-methacetin breath test in evaluating hepatic injury in rats.
Methacetin is thought to be a good substrate for the evaluation of different cytochrome P450 enzymatic systems of liver microsomes because of its rapid metabolism and lack of toxicity in small doses. Recent studies indicate that a methacetin breath test may be a non-invasive alternative for the evaluation of liver function since it correlates well with the severity of liver damage. It may also discriminate between different stages of liver cirrhosis and correlates with the Child-Pugh score. The application of this test in experimental liver damage in animal models has not yet been examined. This study aimed to evaluate the efficacy of the (13)C-methacetin breath test in assessing the extent of hepatic injury in models of acute liver failure, liver cirrhosis, and fatty liver in rats. Absorption of methacetin given per os or intraperitoneally in normal rats was evaluated. The association between liver mass and (13)C-methacetin breath test results was assessed in a 70% hepatectomy rat model. Fulminant hepatic failure was induced by three consecutive intraperitoneal injections of thioacetamide, 300 mg/kg, at 24 h intervals. For induction of liver cirrhosis, rats were given intraperitoneal injections of thioacetamide, 200 mg/kg, twice a week for 12 weeks. A methionine-choline deficient diet was used for the induction of fatty liver. Rats were analyzed for (13)C-methacetin by BreathID (MBID) using molecular correlation spectrometry. BreathID continuously sampled the animal's breath for 60 min and displayed the results on the BreathID screen in real-time. Methacetin was absorbed well irrespective of the administration method in normal rats. Liver mass was associated with peak amplitude, complete percent dose recovery (CPDR) at 30 and 60 min and MBID peak time. 
A high degree of association was also demonstrated with MBID results in acute hepatitis (peak amplitude, 19.6 +/- 3.4 vs 6.3 +/- 1.6; CPDR30, 6.0 +/- 3.3 vs 1.2 +/- 0.5; CPDR60, 13.3 +/- 4.5 vs 3.2 +/- 1.4; and peak time, 31.0 +/- 14.9 vs 46.9 +/- 10.8 min) and liver cirrhosis (peak amplitude, 24.4 +/- 2.3 vs 15.6 +/- 6.4; CPDR30, 7.9 +/- 1.2 vs 2.7 +/- 1.0; CPDR60, 17.8 +/- 2.6 vs 8.8 +/- 2.1; and peak time, 30.2 +/- 1.5 vs 59.6 +/- 14.5 min), but not with grade of liver steatosis. Methacetin is well absorbed and exclusively metabolized in the liver. MBID is a sensitive test and may be a useful tool for the evaluation of functional liver mass in animal models of acute liver failure and cirrhosis. However, MBID could not distinguish between fatty liver and normal liver in rats. |
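The breath-test summary measures reported above (peak amplitude, peak time, and cumulative percent dose recovery at fixed time points) can be computed from a continuously sampled excretion curve. The sketch below uses trapezoidal integration with invented values, since the BreathID device's internal algorithm is not described in the abstract.

```python
# Sketch of deriving breath-test summary measures from a sampled curve.
# The excretion rates below are invented; units assume percent of the
# administered 13C dose recovered per hour.

def cpdr(times_h, pdr_rate):
    """Cumulative percent dose recovery by trapezoidal integration of
    the instantaneous percent-dose-recovery rate over time."""
    total = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        total += 0.5 * (pdr_rate[i] + pdr_rate[i - 1]) * dt
    return total

times = [0.0, 0.25, 0.5, 0.75, 1.0]    # hours (sampled out to 60 min)
rates = [0.0, 12.0, 20.0, 16.0, 10.0]  # hypothetical % dose / h

peak_amplitude = max(rates)
peak_time_min = times[rates.index(peak_amplitude)] * 60
print(peak_amplitude, peak_time_min, round(cpdr(times, rates), 2))
```

In this scheme, a damaged liver shifts the curve down and to the right, which is exactly the pattern in the abstract's hepatitis and cirrhosis data: lower peak amplitude and CPDR, later peak time.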
No evidence for αGal epitope transfer from media containing FCS onto human endothelial cells in culture.
Current clinical applications of cell therapies and tissue engineered (TE) constructs aim to generate non-immunogenic cells, in the best-case scenario of autologous origin. As the cells are cultured, it is theoretically possible that immunoreactive molecules present in xenogeneic cell culture media components, such as fetal calf serum (FCS), are transmitted in the culturing process. This problem has propelled the search for xeno-free culture media; however, in vitro culturing of many cell types, especially TE constructs which consist of several cell types, still relies to a great extent on FCS. In this study, we investigated the degree to which xenoantigens are transmitted to human endothelial cells (EC) cultured in medium containing FCS. Human EC were isolated from pulmonary artery fragments and atrial appendage tissue samples by enzymatic digestion followed by magnetic-activated cell separation (MACS) utilizing CD31 antibodies. The cells were cultured in EGM-2 medium containing 10% FCS for several passages. Griffonia simplicifolia lectin I isolectin B4 (GSL I-B4) was used to detect cell surface-bound αGal epitopes either microscopically or flow cytometrically. Antibody binding to cells exposed to human sera prepared from healthy blood donors was investigated to detect surface-located xenoantigens. An antibody-dependent cytotoxicity assay was conducted with heat-inactivated human serum supplemented with rabbit complement and analyzed by flow cytometry after staining for living and dead cells (LIVE/DEAD assay kit). In all experiments, cells cultured in EGM-2 supplemented with 10% human serum (HS) served as controls. Human EC were isolated and cultured successfully for ≥6 passages. GSL I-B4 staining showed no difference between human EC cultured in FCS and in HS. 
In contrast to porcine EC, which showed strong staining with GSL I-B4 and binding of preformed human serum antibodies, human EC cultured in FCS media did not bind human antibodies from high-titer anti-αGal and anti-Neu5Gc antibody serum. Along these lines, the antibody-dependent cytotoxicity assay showed that human EC were not affected regardless of whether FCS or HS was used, whereas about 40% of porcine EC did not survive. Despite culturing cells in an environment containing xenoantigens, we were unable to demonstrate the translocation of xenogeneic epitopes onto the surface of human EC or find an increased sensitivity to preformed human xenoantibody-dependent complement activity. Therefore, our results suggest that the use of human cells for TE or cell therapy grown in cell culture systems supplemented with FCS does not necessarily lead to an acute rejection reaction upon implantation. |
Cutaneous Rosai-Dorfman disease: clinicopathological profiles, spectrum and evolution of 21 lesions in six patients.
An uncommon histiocytosis primarily involving the lymph nodes, Rosai-Dorfman disease (RDD, originally called sinus histiocytosis with massive lymphadenopathy) involves extranodal sites in 43% of cases; cutaneous RDD (C-RDD) is a rare form of RDD limited to the skin. The clinicopathological diagnosis of C-RDD may sometimes be difficult, with clinical profiles different from those of its nodal counterpart and occasionally misleading histological pictures. There have been few multipatient studies of C-RDD, and documentation of its histological spectrum is rare. Our aims were to identify the clinical and histopathological profiles, associated features, and chronological changes of this rare histiocytosis. From 1991 to 2002, patients diagnosed with C-RDD were identified at four academic hospitals. Clinical presentations, treatments, and courses of each case were documented. In total, 21 biopsy specimens obtained from these patients were re-evaluated and scored microscopically with attention to uncommon patterns and to chronological evolution, both clinical and histological. We examined six patients with C-RDD, three men and three women. The mean age at the first visit was 43.7 years. The clinical presentations were mostly papules, nodules and plaques, varying with the duration and depth of lesions. Although the anatomical distribution was wide, the face was most commonly involved. Evolution was identified clinically: the lesions typically began as papules or plaques, grew to form nodules with satellite lesions, and resolved with fibrotic plaques before complete remission. No patient had lymphadenopathy or extracutaneous lesions during follow-up (mean 50.5 months). At the end of follow-up, the lesions in four patients had completely resolved irrespective of treatment; two patients had persistent lesions. 
The histopathological pattern of the main infiltrate, the components of cells and the stromal responses showed dynamic changes according to the duration of lesions. The characteristic Rosai-Dorfman cells (RD cells) were found in association with a nodular or diffuse infiltrate in 15 lesions (71%). Four lesions (19%) demonstrated a patchy/interstitial pattern. One lesion (5%) assumed the pattern of a suppurative granuloma. RD cells were less readily found in these atypical patterns. Conspicuous proliferation of histiocytes associated with RD cells was found in three lesions, including xanthoma, localized Langerhans cell histiocytosis and xanthogranuloma. Along with lymphocytes, plasma cells were present in all lesions, often in large numbers with occasional binucleated or trinucleated cells. Variably found in the lesions were neutrophils (nine lesions, 43%) and eosinophils (13 lesions, 62%). The former occasionally formed microabscesses, while the latter were often few in number. Vascular proliferation was a relatively constant feature (90%). Fibrosis was found in 10 lesions (48%). Our study further confirms that C-RDD is a distinct entity with different age and possibly race distributions from RDD. Compared with its nodal counterpart, C-RDD demonstrates a wider histopathological spectrum with different clinicopathological phases depending on duration of the lesions. Awareness of these features is helpful in making a correct diagnosis. The associations of C-RDD with other histiocytoses may have important implications for the pathogenesis of this rare histiocytosis. |
Amplified proinflammatory cytokine expression and toxicity in mice coexposed to lipopolysaccharide and the trichothecene vomitoxin (deoxynivalenol).
A single oral exposure to the trichothecene vomitoxin (VT) has been previously shown in the mouse to increase splenic mRNA levels for several cytokines in as little as 2 h. Since one underlying mechanism for these effects likely involves superinduction of transiently expressed cytokine genes, VT may also potentially amplify cytokine responses to inflammatory stimuli. To test this possibility, the effects of oral VT exposure on tumor necrosis factor-alpha (TNF-alpha), interleukin-6 (IL-6), and IL-1beta expression were measured in mice that were intraperitoneally injected with lipopolysaccharide (LPS), a prototypic inflammatory agent. As anticipated, VT alone at 1, 5, and 25 mg/kg body weight increased splenic mRNA expression of all three cytokines after 3 h in a dose-dependent fashion. LPS injection at 1 and 5 mg/kg body weight also induced proinflammatory cytokine mRNA expression. There was a synergistic increase in TNF-alpha splenic mRNA levels in mice treated with both VT and LPS as compared to mice treated with either toxin alone, whereas the effects were additive for IL-6 and IL-1beta mRNA expression. When relative mRNA levels were examined over a 12-h period in mice given LPS (1 mg/kg) and/or VT (5 mg/kg), significant enhancement was observed up to 6, 12, and 3 h for TNF-alpha, IL-6, and IL-1beta, respectively. When plasma cytokine concentrations were measured, TNF-alpha was found to peak at 1 h and was significantly increased at 1, 3, and 6 h if mice were given LPS and VT, whereas LPS or VT alone caused much smaller increases in plasma TNF-alpha. Plasma IL-6 peaked at 3 h in the LPS, VT, and LPS/VT groups, with the combined toxin group exhibiting additive effects. Plasma IL-1beta was not detectable. The potential for VT and LPS to enhance toxicity was examined in a subsequent study. 
Mortality was not observed up to 72 h in mice exposed to a single oral dose of VT at 25 mg/kg body weight or to an intraperitoneal dose of LPS at 1 or 5 mg/kg body weight; however, all mice receiving VT and either LPS dose became moribund in less than 40 h. The principal histologic lesions in the moribund mice treated with VT and LPS were marked cell death and loss in thymus, Peyer's patches, spleen, and bone marrow. In all of these lymphoid tissues, treatment-induced cell death had characteristic histologic features of apoptosis causing lymphoid atrophy. These results suggest that LPS exposure may markedly increase the toxicity of trichothecenes and that the immune system was a primary target of these interactive effects. |
Value of foetal nasal bone measurement in first-trimester trisomy 21 screening.
The purpose of this study was to assess the feasibility of foetal nasal bone (NB) measurement during the first trimester of pregnancy, and to examine the contribution of this measurement to prenatal screening for Down syndrome, following the definition of an NB threshold using ROC curves in an unselected population. This prospective study was carried out at our centre SIHCUS-CMCO (reference centre) from January 2002 to December 2004 on a total of 2,044 pregnant outpatients at gestational weeks 11-14. Only 1,260 singleton foetuses were used for statistical analysis. In the other 784 patients, we were unable to obtain a correct image allowing a reproducible measurement. NB was measured during the same session as nuchal translucency (NT) measurement. Ten trained sonographers took part in the study. Correlation indices were evaluated to examine the link between the variables of interest and NB. Screening values of NB measurement for T 21 were calculated according to crown-rump length, and expressed as the best threshold in multiples of the median as determined by ROC curve. Screening values of genetic ultrasound were then evaluated by adding NB measurement to maternal age and NT measurement. Two thousand and forty-four patients were included. We recorded 30 cases of T 21, 14 cases of Trisomy 18, 10 cases of Trisomy 13 and 25 cases of other karyotype abnormalities. Measurement was feasible in 62% of all cases. We observed a significant relation between NB and NT (p = 0.001), as well as between NB and crown-rump length (p < 0.0001). However, the size of the NB was not correlated with maternal ethnic group (p = 0.314). At the 0.6 multiple-of-the-median threshold, screening values of NB measurement for T 21 were: sensitivity 32%, false positive rate 10%, positive predictive value 13.6%, and negative predictive value 96.9%. The likelihood ratio for T 21 in case of NB ≤ 0.6 multiples of the median was 4.4 (2.0-9.4). 
Screening values for maternal age and NT measurement were: sensitivity 88%, false positive rate 23%, positive predictive value 9.7%, and negative predictive value 99.6%. Inclusion of NB measurement increased sensitivity to 100%, positive predictive value to 13.6%, and negative predictive value to 100%, and decreased the false positive rate to 5%. NB measurement appears to be a valuable sonographic marker for T 21. However, its low feasibility makes it unsuitable for routine first-trimester T 21 screening in an unselected population. Its statistical independence from NT thickness needs further evaluation. |
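The screening indices quoted above (sensitivity, false positive rate, predictive values, and the likelihood ratio) all derive from a 2x2 table of test result against karyotype. As a minimal sketch with purely hypothetical counts rather than the study's data (`screening_metrics` is an illustrative helper, not from the paper):

```python
# Illustrative sketch: screening-test metrics from a 2x2 table.
# The counts below are hypothetical, NOT the study's data.

def screening_metrics(tp, fp, fn, tn):
    """Return sensitivity, false positive rate, PPV, NPV and LR+ from 2x2 counts."""
    sens = tp / (tp + fn)       # P(test positive | disease)
    fpr = fp / (fp + tn)        # 1 - specificity
    ppv = tp / (tp + fp)        # P(disease | test positive)
    npv = tn / (tn + fn)        # P(no disease | test negative)
    lr_pos = sens / fpr         # positive likelihood ratio
    return sens, fpr, ppv, npv, lr_pos

# Hypothetical example: 20 affected and 400 unaffected fetuses
sens, fpr, ppv, npv, lr = screening_metrics(tp=8, fp=40, fn=12, tn=360)
print(round(sens, 2), round(fpr, 2), round(lr, 1))  # -> 0.4 0.1 4.0
```

Under these formulas, a positive likelihood ratio of 4.4, as reported, means a below-threshold NB measurement multiplies the pre-test odds of T 21 by roughly 4.4.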
Temporal changes of novel transcripts in the chicken retina following imposed defocus.
Changes in retinal gene expression are one of the first steps in the signaling pathway underlying the visual control of eye growth. We aimed to identify novel, as yet unknown genes that alter their expression pattern following imposed defocus, diffuser wear, or during recovery from myopia. Sequences found earlier in differential display studies were subjected to 5'-RACE and identified as 15 kDa selenoprotein P and prolidase. Moreover, we obtained more sequence information for a yet unidentified gene. We studied the time course of expression of these genes following lens or diffuser treatment. Ten- to 14-day-old white leghorn chickens (n = 4-7 per group) were treated with monocular +7 D or -7 D lenses for 2, 4, 6, or 24 h, or treated with monocular or binocular diffusers for 2, 4, or 6 h. Another group of chickens was allowed to recover for 4 h from 4 days of diffuser wear. Untreated chicks served as a control for contralateral eye effects. Following the extraction of retinal RNA, the relative expression of the three genes was determined by semi-quantitative real-time PCR. We found a significant upregulation of selenoprotein P expression after 24 h of treatment with positive (+380%) or negative lenses (+387%), which was even more prominent in the contralateral untreated eyes (positive: +542%; negative: +786%). A rapid change in selenoprotein mRNA levels was induced by binocular diffuser wear for 2 h (+425%), whereas defocus blur in one eye led to an increase only after 6 h (+261%). There was a significant upregulation of prolidase mRNA after 24 h of treatment with positive (+75%) but not with negative lenses. Moreover, blur induced by diffusers resulted in a highly significant rise of prolidase mRNA levels after 4 h, both with monocular (+142%) and binocular (+106%) treatment. This is similar to what was found in the previous differential display (DD) screening of monocularly treated eyes. 
In contrast to the findings of the DD screening, the mRNA expression of the unknown gene remained unchanged after both hyperopic and myopic defocus. Again, blur induced by diffusers evoked the most prominent change after 6 h of binocular treatment. There were no significant alterations in the mRNA levels of the three investigated genes after 4 h of recovery from myopia induced by a 4 day period of diffuser treatment. The mRNA expression of selenoprotein P, prolidase, and of the not yet identified gene (sequence 3) is clearly altered by retinal image degradation imposed by diffuser wear and, in part, by defocus imposed by spectacle lenses. However, none of the candidates is regulated by the sign of imposed defocus, suggesting a role in retinal contrast processing. |
Immune responses in lactating Holstein cows supplemented with Cu, Mn, and Zn as sulfates or methionine hydroxy analogue chelates.
The aim of this study was to compare effects of inorganic sulfate versus chelated forms of supplemental Cu, Mn, and Zn on milk production, plasma and milk mineral concentrations, neutrophil activity, and antibody titer response to a model vaccination. Holstein cows (n=25) were assigned in 2 cohorts based on calving date to a 12-wk randomized complete block design study. The first cohort consisted of 17 cows that had greater days in milk (DIM; mean of 77 DIM at the start of the trial) than the second cohort of 8 cows (32 DIM at the start of the trial). Diets were formulated to supplement 100% of National Research Council requirements of Cu, Mn, and Zn by either inorganic trace minerals (ITM) in sulfate forms or chelated trace minerals (CTM) supplied as metal methionine hydroxy analog chelates, without accounting for trace mineral contribution from other dietary ingredients. Intake and milk production were recorded daily. Milk composition was measured weekly, and milk Cu, Mn, and Zn were determined at wk 0 and 8. Plasma Cu and Zn concentrations and neutrophil activity were measured at wk 0, 4, 8, and 12. Neutrophil activity was measured by in vitro assays of chemotaxis, phagocytosis, and reactive oxygen species production. A rabies vaccination was administered at wk 8, and vaccine titer response at wk 12 was measured by both rapid fluorescent focus inhibition test and ELISA. Analyzed dietary Cu was 21 and 23 mg/kg, Mn was 42 and 46 mg/kg, and Zn was 73 and 94 mg/kg for the ITM and CTM diets, respectively. No effect of treatment was observed on milk production, milk composition, or plasma minerals. Dry matter intake was reduced for CTM compared with ITM cows, but this was largely explained by differences in body weight between treatments. Milk Cu concentration was greater for CTM than ITM cows, but this effect was limited to the earlier DIM cohort of cows and was most pronounced for multiparous compared with primiparous cows. 
Measures of neutrophil function were unaffected by treatment except for an enhancement in neutrophil phagocytosis with the CTM treatment found for the later DIM cohort of cows only. Rabies antibody titer in CTM cows was 2.8-fold that of ITM cows as measured by ELISA, with a trend for the rapid fluorescent focus inhibition test. Supplementation of Cu, Mn, and Zn as chelated sources may enhance immune response of early lactation dairy cows compared with cows supplemented with inorganic sources. |
Survival in patients with spinocerebellar ataxia types 1, 2, 3, and 6 (EUROSCA): a longitudinal cohort study.
Spinocerebellar ataxias are dominantly inherited progressive ataxia disorders that can lead to premature death. We aimed to study the overall survival of patients with the most common spinocerebellar ataxias (SCA1, SCA2, SCA3, and SCA6) and to identify the strongest contributing predictors that affect survival. In this longitudinal cohort study (EUROSCA), we enrolled men and women, aged 18 years or older, from 17 ataxia referral centres in ten European countries; participants had positive genetic test results for SCA1, SCA2, SCA3, or SCA6 and progressive, otherwise unexplained, ataxias. Survival was defined as the time from enrolment to death for any reason. We used the Cox regression model adjusted for age at baseline to analyse survival. We used prognostic factors with a p value less than 0·05 from a multivariate model to build nomograms and assessed their performance based on discrimination and calibration. The EUROSCA study is registered with ClinicalTrials.gov, number NCT02440763. Between July 1, 2005, and Aug 31, 2006, 525 patients with SCA1 (n=117), SCA2 (n=162), SCA3 (n=139), or SCA6 (n=107) were enrolled and followed up. The 10-year survival rate was 57% (95% CI 47-69) for SCA1, 74% (67-81) for SCA2, 73% (65-82) for SCA3, and 87% (80-94) for SCA6. Factors associated with shorter survival were: dysphagia (hazard ratio 4·52, 95% CI 1·83-11·15) and a higher value for the Scale for the Assessment and Rating of Ataxia (SARA) score (1·26, 1·19-1·33) for patients with SCA1; older age at inclusion (1·04, 1·01-1·08), longer CAG repeat length (1·16, 1·03-1·31), and higher SARA score (1·15, 1·10-1·20) for patients with SCA2; older age at inclusion (1·44, 1·20-1·74), dystonia (2·65, 1·21-5·53), higher SARA score (1·26, 1·17-1·35), and negative interaction between CAG and age at inclusion (0·994, 0·991-0·997) for patients with SCA3; and higher SARA score (1·17, 1·08-1·27) for patients with SCA6. 
The nomogram-predicted probability of 10-year survival showed good discrimination (c index 0·905 [SD 0·027] for SCA1, 0·822 [0·032] for SCA2, 0·891 [0·021] for SCA3, and 0·825 [0·054] for SCA6). Our study provides quantitative data on the survival of patients with the most common spinocerebellar ataxias, based on a long follow-up period. These results have implications for the design of future interventional studies of spinocerebellar ataxias; for example, the prognostic survival nomogram could be useful for selection and stratification of patients. Our findings need validation in an external population before they can be used to counsel patients and their families. European Union 6th Framework programme, German Ministry of Education and Research, Polish Ministry of Scientific Research and Information Technology, European Union 7th Framework programme, and Fondation pour la Recherche Médicale. |
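The c index reported for the nomograms (Harrell's concordance index) is the proportion of usable patient pairs in which the patient who died earlier also carried the higher predicted risk, so 0·5 is chance and 1·0 perfect discrimination. A toy sketch on invented data, not the study's, and as an O(n^2) illustration rather than an efficient implementation:

```python
# Sketch of Harrell's c index on toy survival data (hypothetical values).

def c_index(risk, time, event):
    """Proportion of usable pairs where the earlier death had the higher risk."""
    concordant, usable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # a pair is usable if patient i had the event before j's time
            if event[i] and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1      # correctly ordered pair
                elif risk[i] == risk[j]:
                    concordant += 0.5    # ties count half
    return concordant / usable

# Toy cohort: predicted risks, follow-up times (years), event indicators
risk = [0.9, 0.7, 0.4, 0.2]
time = [2, 4, 6, 8]
event = [1, 1, 0, 1]
print(c_index(risk, time, event))  # -> 1.0 (perfectly ordered toy data)
```

On this toy cohort every usable pair is ordered correctly, so the index is 1·0; the study's values around 0·82-0·91 indicate strong but imperfect discrimination.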
Utility of Magnetic Resonance Imaging to Monitor Surgical Meshes: Correlating Imaging and Clinical Outcome of Patients Undergoing Inguinal Hernia Repair.
From a surgeon's point of view, meshes implanted for inguinal hernia repair should overlap the defect by 3 cm or more during implantation to avoid hernia recurrence secondary to mesh shrinkage. The use of magnetic resonance imaging (MRI)-visible meshes now offers the opportunity to noninvasively monitor whether a hernia is still covered sufficiently in the living patient. The purpose of this study was therefore to evaluate the efficacy of hernia repair after mesh implantation based on MRI findings (mesh coverage, visibility of hernia structures) and based on the patient's postoperative symptoms. In this prospective study approved by the ethics committee, 13 MRI-visible meshes were implanted in 10 patients (3 bilaterally) for inguinal hernia repair between March 2012 and January 2013. Senior visceral surgeons (>7 years of experience) implanted the meshes via a laparoscopic transabdominal preperitoneal procedure. Magnetic resonance imaging was performed within 1 week and at 3 months after surgery on a 1.5-T system. Mesh position, deformation, and coverage of the hernia were visually assessed in consensus and rated on a 4-point semiquantitative scoring system. Distances from the hernia center point to the mesh borders (overlap) were measured. Mesh position and hernia coverage postoperatively and at 3 months after implantation were correlated with the respective patients' clinical symptoms. Statistical analysis was performed using the Wilcoxon signed rank test. Two of the 13 meshes presented with an atypical mesh configuration along the course of the psoas muscle with a short medial overlap of less than 2 cm. Eleven of the 13 meshes exhibited a typical mesh configuration with lateral folding and initial overlap of more than 2 cm. 
Between baseline and 3 months' follow-up, average overlap decreased by 10% in the medial direction (3.75 cm vs 3.36 cm, P = 0.22), by 20% in the lateral direction (3.55 cm vs 2.82 cm, P = 0.01), by 2% in the superior direction (5.82 cm vs 5.72 cm, P = 0.55), and by 19% in the posterior direction (4.11 cm vs 3.34 cm, P = 0.01). Between baseline and 3 months' follow-up, mesh folding increased mildly in the medial direction, whereas no change was found in the other directions. Individual folds of the mesh were flexible over time, whereas the gross visual configuration and location of meshes did not change. Four of the 13 former hernia sites were mildly painful at follow-up, whereas 9 of the 13 were completely asymptomatic. No correlation between clinical symptoms and mesh position or hernia coverage was found. Our results suggest that the actual postoperative mesh position after release of laparoscopic pneumoperitoneum may deviate from its position during surgery. Gross mesh position and configuration differed between patients but did not change within a given patient over the observation period of 3 months after surgery. We did not find a correlation between clinical symptoms and mesh configuration or position. Shrinkage of meshes does occur, yet not as a concentric process but in a regionally variable manner, leading to a reduction in hernia coverage of up to 20% in the lateral and posterior directions. |
Revisiting an old disease? Risk factors for bovine enzootic haematuria in the Kingdom of Bhutan.
Bovine enzootic haematuria (BEH) is a debilitating disease of cattle caused by chronic ingestion of bracken fern. Control of BEH is difficult when bracken fern is abundant and fodder resources are limited. To fill a significant knowledge gap on modifiable risk factors for BEH, we conducted a case-control study to identify cattle management practices associated with BEH in the Bhutanese cattle population. A case-control study involving 16 of the 20 districts of Bhutan was carried out between March 2012 and June 2014. In Bhutan sodium acid phosphate and hexamine (SAP&H) is used to treat BEH-affected cattle. All cattle greater than three years of age and treated with SAP&H in 2011 were identified from treatment records held by animal health offices. Households with at least one SAP&H-treated cattle were defined as probable cases. Probable case households were visited and re-classified as confirmed case households if the BEH status of cattle was confirmed following clinical examination and urinalysis. Two control households were selected from the same village as the case household. Households were eligible to be controls if: (1) householders reported that none of their cattle had shown red urine during the previous five years, and (2) haematuria was absent in a randomly selected animal from the herd following clinical examination. Details of cattle management practices were elicited from case and control householders using a questionnaire. A conditional logistic regression model was used to quantify the association between exposures of interest and household BEH status. A total of 183 cases and 345 controls were eligible for analysis. After adjusting for known confounders, the odds of free-grazing for two and three months in the spring were 3.81 (95% CI 1.27-11.7) and 2.28 (95% CI 1.15-4.53) times greater, respectively, in case households compared to controls. 
The odds of using fresh fern and dry fern as bedding in the warmer months were 2.05 (95% CI 1.03-4.10) and 2.08 (95% CI 0.88-4.90) times greater, respectively, in cases compared to controls. This study identified two husbandry practices that could be modified to reduce the risk of BEH in Bhutanese cattle. Avoiding the use of bracken fern as bedding is desirable; however, if fern is the only available material, it should be harvested during the colder months of the year. Improving access to alternative fodder crops will reduce the need for householders to rely on free-grazing as the main source of metabolisable energy for cattle during the spring. |
Mesh versus suture repair of umbilical hernia in adults: a randomised, double-blind, controlled, multicentre trial.
Both mesh and suture repair are used for the treatment of umbilical hernias, but for smaller umbilical hernias (diameter 1-4 cm) there is little evidence whether mesh repair would be beneficial. In this study we aimed to investigate whether use of a mesh was better in reducing recurrence compared with suture repair for smaller umbilical hernias. We did a randomised, double-blind, controlled multicentre trial in 12 hospitals (nine in the Netherlands, two in Germany, and one in Italy). Eligible participants were adults aged at least 18 years with a primary umbilical hernia of diameter 1-4 cm, and were randomly assigned (1:1) intraoperatively to either suture repair or mesh repair. In the first 3 years of the inclusion period, blocked randomisation (of non-specified size) was achieved by an envelope randomisation system; after this time computer-generated randomisation was introduced. Patients, investigators, and analysts were masked to the allocated treatment, and participants were stratified by hernia size (1-2 cm and >2-4 cm). At study initiation, all surgeons were invited to training sessions to ensure they used the same standardised techniques for suture repair or mesh repair. Patients underwent physical examinations at 2 weeks, and 3, 12, and 24-30 months after the operation. The primary outcome was the rate of recurrences of the umbilical hernia after 24 months assessed in the modified intention-to-treat population by physical examination and, in case of any doubt, abdominal ultrasound. This trial is registered with ClinicalTrials.gov, number NCT00789230. Between June 21, 2006, and April 16, 2014, we randomly assigned 300 patients, 150 to mesh repair and 150 to suture repair. The median follow-up was 25·1 months (IQR 15·5-33·4). 
After a maximum follow-up of 30 months, there were fewer recurrences in the mesh group than in the suture group (six [4%] of 146 patients vs 17 [12%] of 138 patients; 2-year actuarial estimates of recurrence 3·6% [95% CI 1·4-9·4] vs 11·4% [6·8-18·9]; p=0·01, hazard ratio 0·31, 95% CI 0·12-0·80, corresponding to a number needed to treat of 12·8). The most common postoperative complications were seroma (one [<1%] in the suture group vs five [3%] in the mesh group), haematoma (two [1%] vs three [2%]), and wound infection (one [<1%] vs three [2%]). There were no anaesthetic complications or postoperative deaths. This is the first study showing high-level evidence for mesh repair in patients with small hernias of diameter 1-4 cm. Hence we suggest mesh repair should be used for operations on all patients with an umbilical hernia of this size. Department of Surgery, Erasmus University Medical Center, Rotterdam, Netherlands. |
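The number needed to treat of 12·8 quoted above follows directly from the 2-year actuarial recurrence estimates (11·4% with suture vs 3·6% with mesh): it is the reciprocal of the absolute risk reduction. A minimal sketch (the function name is illustrative):

```python
# Sketch: number needed to treat (NNT) from the reported 2-year
# actuarial recurrence estimates (11.4% suture vs 3.6% mesh).

def number_needed_to_treat(risk_control, risk_treatment):
    """NNT = 1 / absolute risk reduction."""
    arr = risk_control - risk_treatment
    return 1.0 / arr

nnt = number_needed_to_treat(risk_control=0.114, risk_treatment=0.036)
print(round(nnt, 1))  # -> 12.8
```

That is, roughly 13 small umbilical hernias would need mesh rather than suture repair to prevent one recurrence over 2 years.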
Ovarian hyperandrogenism is associated with insulin resistance to both peripheral carbohydrate and whole-body protein metabolism in postpubertal young females: a metabolic study.
The role of endogenous androgens in enhancing the body's protein anabolic capacity has been controversial. To examine this question we studied whole-body protein and glucose kinetics in a group of 21 young, postpubertal females (16.3 +/- 0.6 yr), 8 of whom had clinical and laboratory evidence of ovarian hyperandrogenism (OH) (BMI = 37.8 +/- 1.3 kg/m2). We used L-[1-13C]leucine and [6,6,2H2]glucose tracer infusions before and after suppression of their endogenous androgens with estrogen/progesterone supplementation in the form of Triphasil for 4 weeks. Their baseline data were also compared with those of similarly aged girls, 7 obese (OB) (BMI = 36.4 +/- 1.5) and 6 lean (LN) (BMI = 20.9 +/- 0.7), who were normally menstruating and had no evidence of androgen excess. Despite comparable glucose concentrations, both the OH and OB groups had significant hyperinsulinemia (OH > OB), both basally and after iv glucose stimulation, as compared to LN controls (basal insulin: OH, 252 +/- 52 pmol/L; OB, 145 +/- 41; LN, 60 +/- 9; P = 0.009 OH vs. LN; peak insulin: OH, 2052 +/- 417; OB, 1109 +/- 127; LN, 480 +/- 120; P = 0.0009 OH vs. LN). The rate of appearance (Ra) of glucose, a measure of glucose production, was greater in the LN controls than in the OH or OB groups (OH, 2.0 +/- 0.1 mg/kg fat-free mass per min; OB, 1.9 +/- 0.1; LN, 3.3 +/- 0.1; P < 0.004 vs. LN). Calculated total rates of whole-body protein breakdown (leucine Ra), oxidation, and protein synthesis (nonoxidative leucine disposal) were substantially higher in the OH and OB groups as compared with LN controls (P < 0.04 vs. LN); however, when the data were expressed per kilogram of fat-free mass, the OH group had higher rates of proteolysis than the OB and LN groups, with indistinguishable rates between the latter two. 
None of the above-mentioned parameters changed after 1 month of administration of Triphasil, despite a marked reduction in circulating testosterone and free testosterone concentrations after treatment (testosterone, -50%, P = 0.003; free testosterone, -70%, P = 0.02). We conclude that obesity in young postpubertal females is associated with insulin resistance for both peripheral carbohydrate and protein metabolism, and that patients with the OH syndrome have even greater insulin resistance as compared with simple obesity, regardless of treatment for the androgen excess. Carefully designed studies targeting interventions to improve both the hyperandrogenic and hyperinsulinemic state may prove useful even in the early juvenile stages of this disease. |
Functional evaluation of genetic variants associated with endometriosis near GREB1.
Do DNA variants in the growth regulation by estrogen in breast cancer 1 (GREB1) region regulate endometrial GREB1 expression and increase the risk of developing endometriosis in women? We identified new single nucleotide polymorphisms (SNPs) with strong association with endometriosis at the GREB1 locus, although we did not detect altered GREB1 expression in endometriosis patients with defined genotypes. Genome-wide association studies have identified the GREB1 region on chromosome 2p25.1 as increasing endometriosis risk. The differential expression of GREB1 has also been reported by others in association with endometriosis disease phenotype. Fine mapping studies comprehensively evaluated SNPs within the GREB1 region in a large-scale data set (>2500 cases and >4000 controls). Publicly available bioinformatics tools were employed to functionally annotate SNPs showing the strongest association signal with endometriosis risk. Endometrial GREB1 mRNA and protein expression was studied with respect to phases of the menstrual cycle (n = 2-45 per cycle stage), and expression quantitative trait loci (eQTL) analyses for significant SNPs were undertaken for GREB1 [mRNA (n = 94) and protein (n = 44) in endometrium]. Participants in this study were females who provided blood and/or endometrial tissue samples in a hospital setting. The key SNPs were genotyped using Sequenom MassARRAY. The functional roles and regulatory annotations of the identified SNPs were predicted by various publicly available bioinformatics tools. Endometrial GREB1 expression work employed qRT-PCR, western blotting and immunohistochemistry studies. Fine mapping results identified a number of SNPs showing stronger association (0.004 < P < 0.032) with endometriosis risk than the original GWAS SNP (rs13394619) (P = 0.034). Some of these SNPs were predicted to have functional roles, for example, interaction with transcription factor motifs. 
The haplotype (a combination of alleles) formed by the risk alleles from two common SNPs showed significant association (P = 0.026) with endometriosis and epistasis analysis showed no evidence for interaction between the two SNPs, suggesting an additive effect of SNPs on endometriosis risk. In normal human endometrium, GREB1 protein expression was altered depending on the cycle stage (significantly different in late proliferative versus late secretory, P < 0.05) and cell type (glandular epithelium, not stromal cells). However, GREB1 expression in endometriosis cases versus controls and eQTL analyses did not reveal any significant changes. In silico prediction tools are generally based on cell lines different to our tissue and disease of interest. Functional annotations drawn from these analyses should be considered with this limitation in mind. We identified cell-specific and hormone-specific changes in GREB1 protein expression. The lack of a significant difference observed following our GREB1 expression studies may be the result of moderate power on mixed cell populations in the endometrial tissue samples. This study further implicates the GREB1 region on chromosome 2p25.1 and the GREB1 gene with involvement in endometriosis risk. More detailed functional studies are required to determine the role of the novel GREB1 transcripts in endometriosis pathophysiology. Funding for this work was provided by NHMRC Project Grants APP1012245, APP1026033, APP1049472 and APP1046880. There are no competing interests. |
Incidence and Imaging Findings of Costal Cartilage Fractures in Patients with Blunt Chest Trauma: A Retrospective Review of 1461 Consecutive Whole-Body CT Examinations for Trauma.
Purpose To assess the incidence of costal cartilage (CC) fractures in whole-body computed tomographic (CT) examinations for blunt trauma and to evaluate distribution of CC fractures, concomitant injuries, mechanism of injury, accuracy of reporting, and the effect on 30-day mortality. Materials and Methods Institutional review board approval was obtained for this retrospective study. All whole-body CT examinations for blunt trauma over 36 months were reviewed retrospectively and chest trauma CT studies were evaluated by a second reader. Of 1461 patients who underwent a whole-body CT examination, 39% (574 of 1461) had signs of thoracic injuries (men, 74.0% [425 of 574]; mean age, 46.6 years; women, 26.0% [149 of 574]; mean age, 48.9 years). χ2 and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated. Interobserver agreement was calculated by using Cohen kappa values. Results A total of 114 patients (men, 86.8% [99 of 114]; mean age, 48.6 years; women, 13.2% [15 of 114]; mean age, 45.1 years) had 221 CC fractures. The incidence was 7.8% (114 of 1461) in all whole-body CT examinations and 19.9% (114 of 574) in patients with thoracic trauma. Cartilage of rib 7 (21.3%, 47 of 221) was most commonly injured. Bilateral multiple consecutive rib fractures occurred in 36% (41 of 114) versus 14% (64 of 460) in other patients with chest trauma (OR, 3.48; 95% CI: 2.18, 5.53; P < .0001). Hepatic injuries were more common in patients with chest trauma with CC fractures (13%, 15 of 114) versus patients with chest trauma without CC fractures (4%, 18 of 460) (OR, 3.72; 95% CI: 1.81, 7.64; P = .0001), as well as aortic injuries (n = 4 vs n = 0; P = .0015; OR, unavailable). Kappa value for interobserver agreement in detecting CC fractures was 0.65 (substantial agreement). CC fractures were documented in 39.5% (45 of 114) of primary reports. 
The 30-day mortality of patients with CC fractures was 7.02% (eight of 114) versus 4.78% (22 of 460) of other patients with chest trauma (OR, 1.50; 95% CI: 0.65, 3.47; P = .3371). Conclusion CC fractures are common in high-energy blunt chest trauma and often occur with multiple consecutive rib fractures. Aortic and hepatic injuries were more common in patients with CC fractures than in patients without CC fractures. © RSNA, 2017. |
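The odds ratios reported above can be recovered from the quoted 2x2 counts, e.g. bilateral multiple consecutive rib fractures in 41 of 114 patients with CC fractures versus 64 of 460 other patients with chest trauma. A sketch using the standard Wald interval on the log odds ratio (the helper name is ours, not the paper's):

```python
import math

# Sketch: odds ratio with Wald 95% CI from the reported 2x2 counts
# (41/114 with CC fractures vs 64/460 without had bilateral multiple
# consecutive rib fractures).

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI; a/b = outcome yes/no in exposed, c/d in unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=41, b=114 - 41, c=64, d=460 - 64)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # -> 3.48 2.18 5.53
```

The result matches the OR of 3.48 (95% CI: 2.18, 5.53) quoted in the abstract.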
Serum bilirubin and early mortality after transjugular intrahepatic portosystemic shunts: results of a multivariate analysis.
To examine the prognostic utility of the serum bilirubin level before transjugular intrahepatic portosystemic shunt (TIPS) creation as an independent predictor of 30-day mortality in patients who underwent TIPS creation for treatment of variceal hemorrhage. Multiple covariates from a cohort of 220 consecutive patients undergoing TIPS creation were analyzed with use of univariate and multivariate logistic regression. These included pre-TIPS total bilirubin levels, modified Child-Pugh class, APACHE II score, intubation status, etiology of liver disease, and acute versus elective shunting. The mean pre-TIPS serum total bilirubin level was 3.2 mg/dL (range, 0.4-40.3 mg/dL). The bilirubin level was <3 mg/dL in 102 patients, ≥3.0 mg/dL in 58, ≥4.0 mg/dL in 34, and ≥5.0 mg/dL in 27. Each 1.0-mg/dL increase in total bilirubin was associated with 40% greater odds of 30-day mortality (odds ratio = 1.4; 95% CI = 1.2-1.7). Using each threshold as its own referent, bilirubin levels at or greater than 3.0, 4.0, and 5.0 mg/dL stratified patients into increased odds of early death by 5.7, 9.7, and 19.2 times, respectively (all P < .001). A pre-TIPS APACHE II score of >18 increased the odds of early death by a factor of 5.6 (95% CI = 2.4-8.7); modified Child-Pugh class C (vs classes A and B combined) alone increased the odds by a factor of 8.1 (95% CI = 3.6-18.1). Only one of 20 patients (5%) with a pre-TIPS bilirubin level >6.0 mg/dL survived more than 30 days after TIPS creation. In acutely bleeding patients (n = 122) undergoing TIPS creation, bilirubin levels ≥3.0, ≥4.0, and ≥5.0 mg/dL stratified patients into odds ratios of 4.4, 7.1, and 9.8, respectively, compared with 7.1, 13.2, and 9.2 for patients undergoing elective TIPS creation. Combining endotracheal intubation (n = 72) and bilirubin strata yielded mortality odds of 8.3, 12.5, and 20.8 compared with odds of 2.3, 4.6, and 11.2 in nonintubated patients. 
Combining alcoholic cirrhosis (n = 129) with bilirubin levels yielded mortality odds of 8.0, 10.6, and 18.0 compared with other etiologies of liver disease (odds ratios = 2.9, 7.3, and 22.7). An elevated pre-TIPS bilirubin level is a powerful independent predictor of 30-day mortality after TIPS creation with a 40% increased risk of death for each 1-mg/dL increase above 3.0 mg/dL. The predictive value of this criterion is increased in patients who undergo TIPS procedures electively. The magnitude of the effect on mortality is similar to that of APACHE II scores and modified Child-Pugh class but is simpler to ascertain. |
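The per-1-mg/dL odds ratio reported in this abstract composes multiplicatively on the odds scale, which is what makes the higher bilirubin strata carry such large odds. A minimal sketch of that arithmetic, assuming the reported odds ratio of 1.4 (the function name and usage are illustrative, not from the study):

```python
import math

def mortality_odds_multiplier(bilirubin_increase_mg_dl, or_per_unit=1.4):
    """Multiplicative change in 30-day mortality odds for a given rise in
    pre-TIPS total bilirubin, assuming the per-1-mg/dL odds ratio of 1.4
    reported in the abstract."""
    return or_per_unit ** bilirubin_increase_mg_dl

# In a logistic model the per-unit OR is exp(beta), so beta = ln(1.4)
beta = math.log(1.4)

# A 2-mg/dL rise roughly doubles the odds (1.4^2 = 1.96)
print(round(mortality_odds_multiplier(2), 2))
```

This is only the interpretation of a fitted logistic coefficient; the study's actual regression was run on patient-level data not reproduced here.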
Estimation of salt intake by urinary sodium excretion in a Portuguese adult population and its relationship to arterial stiffness.
Portugal has one of the highest mortality rates from stroke, a high prevalence of hypertension and probably a high salt intake level. To evaluate Portuguese salt intake levels and their relationship to blood pressure and arterial stiffness in a sample of four different adult populations living in northern Portugal. A cross-sectional study evaluating 24-hour urinary excretion of sodium (24 h UNa+), potassium and creatinine, blood pressure (BP), and pulse wave velocity (PWV) as an index of aortic stiffness in adult populations of sustained hypertensives (HT), relatives of patients with previous stroke (Fam), university students (US) and factory workers (FW), in the context of their usual dietary habits. We evaluated a total of 426 subjects, mean age 50 ± 22 years, 56% female, BMI 27.9 ± 5.1, BP 159/92 mmHg, PWV 10.4 ± 2.2 m/s, who showed mean 24 h UNa+ of 202 ± 64 mmol/d, corresponding to a daily salt intake of 12.3 g (ranging from 5.2 to 24.8). The four groups were: HT: n = 245, 49 ± 18 years, 92% of those selected, 69% treated, BP 163/94 mmHg, PWV 11.9 m/s, 24 h UNa+ 212 mmol/d, i.e. 12.4 g/d of salt; Fam: n = 38, 64 ± 20 years, 57% of those selected, BP 144/88 mmHg, PWV 10.5 m/s, 24 h UNa+ 194 mmol/d, i.e. 11.1 g/d of salt; US: n = 82, 22 ± 3 years, 57% of those selected, BP 124/77 mmHg, PWV 8.7 m/s, 24 h UNa+ 199 mmol/d, i.e. 11.3 g/d of salt; FW: n = 61, 39 ± 9 years, 47% of those selected, BP 129/79 mmHg, PWV 9.5 m/s, 24 h UNa+ 221 mmol/d, i.e. 12.9 g/d of salt. The ratio of urinary sodium/potassium excretion (1.9 ± 0.4) was significantly higher in HT than in the other three groups. In the 426 subjects, 24 h UNa+ correlated significantly (p < 0.01) with systolic BP (r = 0.209) and with PWV (r = 0.256) after adjustment for age and BP. Multivariate analysis showed that BP, age and 24 h UNa+ correlated independently with PWV taken as a dependent variable. 
Four different Portuguese populations showed similarly high mean daily salt intake levels, almost double those recommended by the WHO. Overall, high urinary sodium excretion correlated consistently with high BP levels and appeared to be an independent determining factor of arterial stiffness. These findings suggest that Portugal in general has a high salt intake diet, and urgent measures are required to restrict salt consumption in order to prevent and treat hypertensive disease and to reduce overall cardiovascular risk and events. |
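The salt intakes quoted per group follow from 24-h urinary sodium under the standard steady-state assumption that each mmol of sodium excreted corresponds to one mmol of NaCl ingested (≈58.44 mg). A minimal sketch of that conversion, which reproduces, for example, the hypertensive group's 212 mmol/d → ~12.4 g/d:

```python
NACL_MG_PER_MMOL = 58.44  # molar mass of NaCl in mg/mmol

def salt_g_per_day(urinary_na_mmol_per_day):
    """Estimate daily salt (NaCl) intake in grams from 24-h urinary sodium
    excretion, assuming ~1 mmol Na excreted per mmol NaCl eaten."""
    return urinary_na_mmol_per_day * NACL_MG_PER_MMOL / 1000.0

# Hypertensive group: 212 mmol/d -> ~12.4 g/d, as reported
print(round(salt_g_per_day(212), 1))
```

Small discrepancies with the published group means are expected, since the study's figures were presumably computed from individual (unrounded) data.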
Making oxygen with ruthenium complexes.
Mastering the production of solar fuels by artificial photosynthesis would be a considerable feat, either by water splitting into hydrogen and oxygen or by reduction of CO(2) to methanol or hydrocarbons: 2H(2)O + 4hν → O(2) + 2H(2); 2H(2)O + CO(2) + 8hν → 2O(2) + CH(4). It is notable that water oxidation to dioxygen is a key half-reaction in both. In principle, these solar fuel reactions can be coupled to light absorption in molecular assemblies, nanostructured arrays, or photoelectrochemical cells (PECs) by a modular approach. The modular approach uses light absorption, electron transfer in excited states, directed long-range electron transfer and proton transfer, both driven by free energy gradients, combined with proton-coupled electron transfer (PCET) and single-electron activation of multielectron catalysis. Until recently, a lack of molecular catalysts, especially for water oxidation, has limited progress in this area. Analysis of the water oxidation mechanism of the "blue" Ru dimer cis,cis-[(bpy)(2)(H(2)O)Ru(III)ORu(III)(OH(2))(bpy)(2)](4+) (bpy is 2,2'-bipyridine) has opened a new, general approach to single-site catalysts both in solution and on electrode surfaces. As a catalyst, the blue dimer is limited by competitive side reactions involving anation, but we have shown that its rate of water oxidation can be greatly enhanced by electron transfer mediators such as Ru(bpy)(2)(bpz)(2+) (bpz is 2,2'-bipyrazine) in solution or Ru(4,4'-((HO)(2)P(O)CH(2))(2)bpy)(2)(bpy)(2+) on ITO (In(2)O(3)/Sn) or FTO (SnO(2)/F) electrodes. In this Account, we describe a general reactivity toward water oxidation in a class of molecules whose properties can be "tuned" systematically by synthetic variations based on mechanistic insight. These molecules catalyze water oxidation driven either electrochemically or by Ce(IV). 
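The claim that water oxidation is the common half-reaction can be made explicit by splitting the two overall equations into their redox halves (standard electrochemistry, stated here for clarity rather than taken verbatim from the Account):

```latex
% Water oxidation half-reaction common to both solar-fuel schemes:
2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
% paired with proton reduction for water splitting:
4\,\mathrm{H^+} + 4\,e^- \longrightarrow 2\,\mathrm{H_2}
% or with CO2 reduction (two oxidation cycles, 8 e^-) for methane:
\mathrm{CO_2} + 8\,\mathrm{H^+} + 8\,e^- \longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
```

Both schemes therefore stand or fall with a catalyst for the four-electron, four-proton oxidation of water.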
The first two were in the series Ru(tpy)(bpm)(OH(2))(2+) and Ru(tpy)(bpz)(OH(2))(2+) (bpm is 2,2'-bipyrimidine; tpy is 2,2':6',2''-terpyridine), which undergo hundreds of turnovers without decomposition with Ce(IV) as oxidant. Detailed mechanistic studies and DFT calculations have revealed a stepwise mechanism: initial 2e(-)/2H(+) oxidation to Ru(IV)=O(2+), 1e(-) oxidation to Ru(V)=O(3+), nucleophilic H(2)O attack to give Ru(III)-OOH(2+), further oxidation to Ru(IV)(O(2))(2+), and, finally, oxygen loss, which is in competition with further oxidation of Ru(IV)(O(2))(2+) to Ru(V)(O(2))(3+), which loses O(2) rapidly. An extended family of 10-15 catalysts based on Mebimpy (Mebimpy is 2,6-bis(1-methylbenzimidazol-2-yl)pyridine), tpy, and heterocyclic carbene ligands all appear to share a common mechanism. The osmium complex Os(tpy)(bpy)(OH(2))(2+) also functions as a water oxidation catalyst. Mechanistic experiments have revealed additional pathways for water oxidation: one involving Cl(-) catalysis and another involving rate enhancement of O-O bond formation by concerted atom-proton transfer (APT). Surface-bound [(4,4'-((HO)(2)P(O)CH(2))(2)bpy)(2)Ru(II)(bpm)Ru(II)(Mebimpy)(OH(2))](4+) and its tpy analog are impressive electrocatalysts for water oxidation, undergoing thousands of turnovers without loss of catalytic activity. These catalysts were designed for use in dye-sensitized solar cell configurations on TiO(2) to provide oxidative equivalents by molecular excitation and excited-state electron injection. Transient absorption measurements on TiO(2)-[(4,4'-((HO)(2)P(O)CH(2))(2)bpy)(2)Ru(II)(bpm)Ru(II)(Mebimpy)(OH(2))](4+) (TiO(2)-Ru(II)-Ru(II)OH(2)) and its tpy analog have provided direct insight into the interfacial and intramolecular electron transfer events that occur following excitation. With added hydroquinone in a PEC configuration, APCE (absorbed-photon-to-current-efficiency) values of 4-5% are obtained for dehydrogenation of hydroquinone, H(2)Q + 2hν → Q + H(2). 
In more recent experiments, we are using the same PEC configuration to investigate water splitting. |
Citrus Blight Found in Yucatan, Mexico.
Citrus blight, a serious tree decline problem of unknown cause in humid citrus-growing areas such as Florida and Louisiana, South America, South Africa, and the Caribbean, has never been reported from Mexico. Citrus blight has no reliable visual symptoms, and physical and chemical tests have to be used for diagnosis. We used water injection into the trunk (2), zinc and potassium analysis of the outer trunk wood (3), and an immunological test for specific blight proteins in the leaves (1). Low uptake when water is injected into the trunk, and high zinc and potassium in the wood, compared with healthy trees, are characteristic of citrus blight (3). Water injection tests and wood analysis of four healthy and four declining trees in the Dzan, Yucatan, Mexico, area in August 1995, showed highly significant differences in water uptake (healthy trees 44.3 ml/min, declining trees 1.0 ml/min), little difference in wood zinc (healthy 2.8 μg/g, declining 3.0 μg/g) and 39% more potassium in the wood (healthy 0.147%, declining 0.204%). No leaf protein tests were done at this location. Tests on eight declining and five healthy trees in Seye, Yucatan, Mexico, in June 1996, showed significant (P = 0.01 to 0.05) differences in water uptake (healthy 12.9 ml/min, declining 0.6 ml/min), in wood zinc (healthy 2.0 μg/g, declining 7.0 μg/g) and potassium in the wood (healthy 0.156%, declining 0.251%). Leaf samples from all eight declining trees were positive for blight in a specific protein test (1). The visual symptoms of all declining trees tested were the same as those of blight-affected trees in Florida and Cuba: zinc deficiency symptoms in the leaves, thin foliage, wilt, and sprouting from the trunk and the main branches. The reasons for the lack of earlier reports of citrus blight from Mexico are apparently climatic and rootstock related. Many of Mexico's citrus-producing areas are dry and blight does not occur in dry areas such as the Mediterranean countries and California. 
Most of Mexico's citrus is grown on sour orange (Citrus aurantium L.) rootstock that is highly resistant to citrus blight, but very susceptible to tristeza virus disease. In response to warnings that tristeza disease might appear in Yucatan, growers planted Valencia orange (C. sinensis (L.) Osbeck) on Cleopatra mandarin (C. reticulata Blanco) rootstock in Dzan and on Volkamer lemon (C. limon (L.) N. L. Burm.) in Seye, both highly susceptible to citrus blight (4). Changes in rootstock to avoid one disease led to problems with another. References: (1) K. S. Derrick et al. Proc. Fla. State Hortic. Soc. 105:26, 1993. (2) R. F. Lee et al. Plant Dis. 68:511, 1984. (3) H. K. Wutscher and C. J. Hardesty. J. Am. Soc. Hortic. Sci. 104:9, 1979. (4) R. H. Young et al. Proc. Fla. State Hortic. Soc. 91:56, 1978. |
[Preventive and therapeutic effects of Xuebijing injection on acute lung injury induced by cardiopulmonary bypass in rats via regulation of microRNA-17-5p expression].
To investigate the preventive effect of Xuebijing injection on acute lung injury induced by cardiopulmonary bypass (CPB) and the underlying mechanism. (1) In vivo experiment: 30 Sprague-Dawley (SD) rats were randomly divided into a sham group, a CPB group, and a Xuebijing pretreatment group (XBJ+CPB group), with 10 rats in each group. The CPB model was reproduced in rats; in the sham group, only arteriovenous puncture was performed, without CPB. In the XBJ+CPB group, 4 mL/kg of Xuebijing injection was injected intraperitoneally 2 hours before CPB; the sham and CPB groups were injected with an equal volume of normal saline at the same time. Femoral artery blood was analyzed 4 hours after operation, and the oxygenation index (PaO2/FiO2) was calculated. The rats were then sacrificed to collect bronchoalveolar lavage fluid (BALF), and the pulmonary permeability index (PPI) was calculated. The lung tissues were harvested, the wet/dry weight ratio (W/D) of lung tissue was measured, and the index of quantitative evaluation of alveolar injury (IQA) was determined. The levels of interleukins (IL-1, IL-6) and tumor necrosis factor-α (TNF-α) in lung tissue and BALF were measured by enzyme-linked immunosorbent assay (ELISA). The content of malondialdehyde (MDA) and the activities of myeloperoxidase (MPO) and superoxide dismutase (SOD) in lung tissue were detected by biochemical methods. The expression of microRNA-17-5p (miR-17-5p) in lung tissue was determined by quantitative reverse transcription-polymerase chain reaction (RT-qPCR). (2) In vitro experiment: type II alveolar epithelial cells (AEC II) were cultured in vitro and randomly divided into a control group (cells treated with preoperative serum from CPB patients with ventricular septal defect), a CPB group (cells treated with post-CPB serum from the same patients), and an XBJ+CPB group (Xuebijing injection 10 g/L plus post-CPB serum). 
After 12 hours of culture, the expression of miR-17-5p in each group was detected by RT-qPCR. AEC II cells were transfected with miR-17-5p mimic, inhibitor, or the corresponding control oligonucleotide (negative control) to observe the effect of miR-17-5p on the regulation by Xuebijing of CPB-induced apoptosis rate and caspase-3 activity. (1) In vivo experiment: compared with the sham group, the PPI, lung W/D ratio, IQA, levels of IL-1, IL-6 and TNF-α in lung tissue and BALF, and MDA content and MPO activity in lung tissue were significantly increased in the CPB group, while PaO2/FiO2 and SOD activity in lung tissue were significantly decreased. These parameters were significantly improved in the XBJ+CPB group, suggesting that Xuebijing pretreatment could ameliorate CPB-induced ALI in rats. The expression of miR-17-5p in lung tissue of the CPB group was significantly down-regulated compared with the sham group [2^(-ΔΔCt): 0.48±0.13 vs. 1.00±0.11, P < 0.05], while the expression of miR-17-5p in the XBJ+CPB group was significantly up-regulated compared with the CPB group [2^(-ΔΔCt): 1.37±0.09 vs. 0.48±0.13, P < 0.05], indicating that the improvement by Xuebijing injection of lung injury after CPB might be related to miR-17-5p. (2) In vitro experiment: the changes in miR-17-5p expression in each group of AEC II cells confirmed the in vivo results. After transfection with miR-17-5p mimic, the apoptosis rate and caspase-3 activity of each group were significantly lower than those transfected with the negative control, and the decrease was more pronounced in the XBJ+CPB group [apoptosis rate: (7.37±0.95)% vs. (12.60±1.90)%, caspase-3 (A value): 0.82±0.09 vs. 1.37±0.08, both P < 0.05]. After transfection with miR-17-5p inhibitor, the apoptosis rate and caspase-3 activity of each group were significantly higher than those transfected with the negative control [in the XBJ+CPB group: apoptosis rate (16.30±1.86)% vs. (12.60±1.90)%, caspase-3 (A value) 1.78±0.13 vs. 1.37±0.08, both P < 0.05]. 
This indicated that the apoptosis of AEC II cells cultured in post-CPB serum was significantly reduced by miR-17-5p, and further reduced by pretreatment with Xuebijing. Xuebijing injection can reduce the inflammatory reaction and oxidative stress of lung tissue in rats with ALI induced by CPB, and improve oxygenation. The mechanism may be related to up-regulation of miR-17-5p expression in AEC II cells and inhibition of AEC II cell apoptosis. |
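The miR-17-5p fold changes quoted in this abstract (e.g., 0.48 vs. 1.00) are 2-ΔΔCt values, i.e., the standard Livak relative-quantification method for RT-qPCR. A minimal sketch of that calculation, with hypothetical Ct values (the real reference gene and Ct data are not given in the abstract):

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression by the Livak 2^-ddCt method: normalize the
    target Ct to a reference gene in each condition, then compare the
    treated sample against the control. Ct inputs here are hypothetical."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Example: target Ct one cycle higher (relative to the reference) in the
# treated sample than in controls -> expression halved (2^-1 = 0.5)
print(fold_change_ddct(26.0, 18.0, 25.0, 18.0))
```

Because Ct is logarithmic in template amount, each extra cycle of normalized delay halves the inferred expression, which is why the method reports fold changes as powers of two.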
Determinants of sound location selectivity in bat inferior colliculus: a combined dichotic and free-field stimulation study.
This study of the neural representation of sound location in the bat Pteronotus parnellii describes how the peripheral and central components of its auditory system shape the horizontal and vertical spatial selectivity of single neurons in the inferior colliculus. Pteronotus extracts spatial information from the echoes of an emitted pulse composed of four constant-frequency harmonics (30, 60, 90, and 120 kHz), each terminated by a downward frequency sweep. To quantify the intensity cues available in the echo, cochlear microphonic response thresholds were used to measure the directional selectivity of the ear and the interaural intensity level disparities (IIDs) created between ears at standardized speaker positions in the bat's frontal sound field, at frequencies in the pulse spectrum. Speaker positions where thresholds were lowest were termed the sensitive area (SA) of the ear. Positions where IID values were greater than 10 dB were termed the difference area (DA). Ear directionality exhibited a pronounced frequency dependence, both in terms of the degree of directional selectivity and the position of the SA. At the 30-kHz harmonic of the pulse, the ear was broadly directional; the SA covered most of the lower half of the ipsilateral field. The ear was highly directional at the 60- and 90-kHz harmonics. Also, the vertical position of the SA changed dramatically between 60 and 90 kHz, from the horizontal midline at 60 kHz to 40 degrees below the midline at 90 kHz. The positions of the DAs also showed a pronounced frequency dependence. The 30-kHz DA was restricted to the extreme lateral part of the frontal sound field. The 60- and 90-kHz DAs were located in the same positions as the equivalent SAs and exhibited the same difference in vertical position. The DAs of the pulse harmonics differ in both their horizontal and vertical positions; the ears thus generate pronounced binaural spectral cues, which provide two-dimensional spatial information. 
In the inferior colliculus, a combined paradigm of closed-field dichotic stimulation, followed by free-field stimulation, was used to document the frequency tuning and binaural response properties of single neurons and to correlate these properties with the neuron's horizontal and vertical spatial selectivity in the frontal sound field. The region where a neuron responded to free-field stimulation at the lowest intensity was termed its SA. A neuron's frequency tuning primarily influenced its degree of spatial selectivity and its sensitivity in the vertical plane, reflecting the directional properties of the external ears at the neuron's best frequency. |
Development of antifungal ingredients for dairy products: From in vitro screening to pilot scale application.
Biopreservation represents a complementary approach to traditional hurdle technologies for reducing microbial contaminants (pathogens and spoilers) in food. In the dairy industry, which is affected by fungal spoilage, biopreservation can also be an alternative to currently used preservatives (e.g. natamycin, potassium sorbate). The aim of this study was to develop antifungal fermentates derived from two dairy substrates using a sequential approach comprising an in vitro screening followed by an in situ validation. The in vitro screening of the antifungal activity of fermentates derived from 430 lactic acid bacteria (LAB) (23 species), 70 propionibacteria (4 species) and 198 fungi (87 species) was performed against four major spoilage fungi (Penicillium commune, Mucor racemosus, Galactomyces geotrichum and Yarrowia lipolytica) using a cheese-mimicking model. The most active fermentates were obtained from Lactobacillus brevis, Lactobacillus buchneri, Lactobacillus casei/paracasei and Lactobacillus plantarum among the tested LAB, Propionibacterium jensenii among propionibacteria, and Mucor lanceolatus among the tested fungi. Then, for the 11 most active fermentates, culture conditions were optimized by varying incubation time and temperature in order to enhance their antifungal activity. Finally, the antifungal activity of 3 fermentates of interest, obtained from Lactobacillus rhamnosus CIRM-BIA1952, P. jensenii CIRM-BIA1774 and M. lanceolatus UBOCC-A-109193, was evaluated in real dairy products (sour cream and semi-hard cheese) at pilot scale using challenge and durability tests. In parallel, the impact of these ingredients on the organoleptic properties of the obtained products was also assessed. In semi-hard cheese, application of the selected fermentates on the cheese surface delayed the growth of spoilage molds for up to 21 days without any effect on organoleptic properties, the P. jensenii CIRM-BIA1774 fermentate being the most active. 
In sour cream, incorporation of the latter fermentate at 2 or 5% yielded a high antifungal activity but was detrimental to the product's organoleptic properties. Determination of the concentration limit compatible with product acceptability showed that incorporation of this fermentate at 0.4% prevented growth of fungal contaminants in durability tests but had a more limited effect against M. racemosus and P. commune in challenge tests. To our knowledge, this is the first time that the workflow followed in this study, from in vitro screening in a dairy matrix to scale-up in cheese and sour cream, has been applied to the production of natural ingredients drawing on a large microbial diversity in terms of species and strains. This approach yielded several antifungal fermentates that are promising candidates for the biopreservation of dairy products. |
Why and how does native topology dictate the folding speed of a protein?
Since the pioneering work of Plaxco, Simons, and Baker, it has been well known that the rates of protein folding strongly correlate with the average sequence separation (absolute contact order (ACO)) of native contacts. In spite of a multitude of papers, our understanding of the basis of the relation between folding speed and ACO is still lacking. We model the transition state as a Gaussian polymer chain decorated with weak springs between native contacts, while the unfolded state is modeled as a Gaussian chain only. Using these Hamiltonians, our perturbative calculation explicitly shows that folding speed and ACO are linearly related when only the first-order term in the series is considered. However, at second order, we find two new topological metrics, termed COC(1) and COC(2) (COC stands for contact order correction). These additional correction terms are needed to properly account for the entropy loss due to overlapping (nested or linked) loops, which is not well described by the simple addition of entropies in ACO. COC(1) and COC(2) are related to fluctuations and correlations among different sequence separations. The new metric combining ACO, COC(1), and COC(2) improves the dependence of folding speed on native topology when applied to three different databases: (i) two-state proteins with only α∕β and β proteins, (ii) two-state proteins (α∕β, β, and purely helical proteins all combined), and (iii) a master set of (multi-state and two-state) folding proteins. Furthermore, the first-principles calculation provides direct physical insight into the meaning of the fit parameters. The coefficient of ACO, for example, is related to the average strength of the contacts, while the constant term is related to the protein folding speed limit. With the new scaling law, our estimate of the folding speed limit is in close agreement with the widely accepted value of 1 μs observed in proteins and RNA. 
Analyzing an exhaustive set (7367) of monomeric proteins from the Protein Data Bank, we find that our new topology-based metric (combining ACO, COC(1), and COC(2)) scales as N^(0.54), N being the number of amino acids in a protein. This is in remarkable agreement with a previous argument based on random systems that predicts that protein folding speed depends on exp(-N^(0.5)). The first-principles calculation presented here provides deeper insight into the role of topology in protein folding and unifies many parallel, seemingly disconnected arguments, demonstrating the existence of a universal mechanism in protein folding kinetics that can be understood from simple polymer-physics-based principles. |
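The quantity this abstract builds on, absolute contact order, is simply the mean sequence separation of native contacts. A minimal sketch of the definition (the toy contact list is hypothetical; the COC(1)/COC(2) correction terms, which involve fluctuations and correlations of these separations, are omitted):

```python
def absolute_contact_order(contacts):
    """Absolute contact order (ACO): mean sequence separation |i - j|
    over the set of native contacts, given as residue-index pairs."""
    return sum(abs(i - j) for i, j in contacts) / len(contacts)

# Toy contact map: two short-range contacts and one long-range contact,
# so ACO = (4 + 4 + 18) / 3
print(absolute_contact_order([(1, 5), (10, 14), (2, 20)]))
```

In practice the contact list would be extracted from a native structure using a distance cutoff between heavy atoms, which is a separate (and convention-dependent) step.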
Tabebuia aurea decreases hyperalgesia and neuronal injury induced by snake venom.
Tabebuia aurea (Silva Manso) Benth. & Hook. f. ex S. Moore is used as an anti-inflammatory, analgesic and antiophidic agent in traditional medicine, though its pharmacological properties are still underexplored. In bothropic envenoming, pain is a key symptom driven by an intense local inflammatory and neurotoxic event. Antivenom serum therapy is still the main treatment despite its poor local effects against pain and tissue injury. Furthermore, it is limited to ambulatory care settings, leaving room for the search for new and more inclusive pharmacological approaches. We evaluated the effects of Tabebuia aurea hydroethanolic extract (HEETa) on hyperalgesia and neuronal injury induced by Bothrops mattogrossensis venom (VBm). Stem barks from Tabebuia aurea were extracted with ethanol and water (7:3, v/v) to yield the extract HEETa. HEETa was then analyzed by LC-DAD-MS and its constituents were identified. Snake venoms were extracted from adult specimens of Bothrops mattogrossensis, lyophilized and kept at -20 °C until use. Male Swiss mice, weighing 20-25 g, were used for hyperalgesia (electronic von Frey), motor impairment (rotarod test) and tissue injury evaluation (histopathology and ATF-3 immunohistochemistry). Three experimental groups were formed: VBm (1 pg, 1 ng, 0.3 μg, 1 μg, 3 and 6 μg/paw), HEETa orally (180, 540, 720, 810 or 1080 mg/kg; 10 mL/kg, 30 min prior to VBm inoculation) and neutralized VBm (VBm:HEETa, 1:100 parts, respectively). In all sets of experiments a saline control group was used. First, we constructed a dose- and time-response curve of VBm-induced hyperalgesia. Next, the maximum hyperalgesic dose of VBm was used to perform the oral HEETa dose- and time-response curve and the analyses of neutralized VBm. Paw tissues for histopathology and DRGs were collected from animals inoculated with the maximum VBm dose and treated with the antihyperalgesic effective dose of HEETa or with neutralized VBm. 
Paws were collected 2 or 72 h after VBm inoculation, and DRGs at the time of maximum expected ATF-3 expression (72 h). From the HEETa extract, glycosylated iridoids were identified, such as catalpol, minecoside, verminoside and specioside. VBm induced a time- and dose-dependent hyperalgesia, with its highest effect seen with 3 μg/paw, 2 h after venom inoculation. The HEETa effective dose (720 mg/kg) significantly decreased VBm-induced hyperalgesia (3 μg/paw) with no motor impairment or signs of acute toxicity. The antihyperalgesic action of HEETa started 1.5 h after VBm inoculation and lasted until 2 h after VBm. Hyperalgesia was not reduced by VBm:HEETa neutralization. Histopathology revealed a large hemorrhagic field 2 h after VBm inoculation and an intense inflammatory infiltrate of polymorphonuclear cells at 72 h. Both the HEETa orally and VBm:HEETa groups showed reduced inflammation at 72 h after VBm. The venom also significantly induced ATF-3 expression (35.37 ± 3.25%) compared with the saline group (4.18 ± 0.68%), and this induction was reduced in the HEETa orally (25.87 ± 2.57%) and VBm:HEETa (19.84 ± 2.15%) groups. HEETa reduced the hyperalgesia and neuronal injury induced by VBm. These effects could be related to the iridoid glycosides detected in HEETa and their reported intrinsic mechanisms. |
Postinjury Exercise and Platelet-Rich Plasma Therapies Improve Skeletal Muscle Healing in Rats But Are Not Synergistic When Combined.
Skeletal muscle injuries are the most common sports-related injury and a major concern in sports medicine. The effect of platelet-rich plasma (PRP) injections on muscle healing is still poorly understood, and current data are inconclusive. To evaluate the effects of an ultrasound-guided intramuscular PRP injection, administered 24 hours after injury, and/or posttraumatic daily exercise training for 2 weeks on skeletal muscle healing in a recently established rat model of skeletal muscle injury that highly mimics the muscle trauma seen in human athletes. Controlled laboratory study. A total of 40 rats were assigned to 5 groups. Injured rats (medial gastrocnemius injury) received a single PRP injection (PRP group), daily exercise training (Exer group), or a combination of a single PRP injection and daily exercise training (PRP-Exer group). Untreated and intramuscular saline-injected animals were used as controls. Muscle force was determined 2 weeks after muscle injury, and muscles were harvested and evaluated by means of histological assessment and immunofluorescence microscopy. Both PRP (exhibiting 4.8-fold higher platelet concentration than whole blood) and exercise training improved muscle strength (maximum tetanus force, TetF) by approximately 18%, 20%, and 30% in the PRP, PRP-Exer, and Exer groups, respectively. Specific markers of muscle regeneration (developmental myosin heavy chain, dMHC) and scar formation (collagen I) demonstrated the beneficial effect of the tested therapies in accelerating the muscle healing process in rats. PRP and exercise treatments stimulated the growth of newly formed regenerating muscle fibers (1.5-, 2-, and 2.5-fold increase in myofiber cross-sectional area in the PRP, PRP-Exer, and Exer groups, respectively) and reduced scar formation in injured skeletal muscle (20%, 34%, and 41% reduction in the PRP, PRP-Exer, and Exer groups, respectively). 
Exercise-treated muscles (PRP-Exer and Exer groups) had a significantly reduced percentage of dMHC-positive regenerating fibers (35% and 47% decrease in dMHC expression, respectively), indicating that exercise therapies accelerated the muscle healing process, as evidenced by the more rapid replacement of the embryonic-developmental myosin isoform by mature muscle myosin isoforms. Intramuscular PRP injection and, especially, treadmill exercise improve the histological outcome and force recovery of injured skeletal muscle in a rat injury model that imitates sports-related muscle injuries in athletes. However, there was no synergistic effect when both treatments were combined, suggesting that PRP does not add any beneficial effect to exercise-based therapy in the treatment of injured skeletal muscle. This study demonstrates the efficacy of an early active rehabilitation protocol or a single intramuscular PRP injection in muscle recovery. The data also reveal that the outcome of early active rehabilitation is adversely affected by the PRP injection when the two therapies are combined, which could explain why PRP therapies have failed in randomized clinical trials in which athletes adhered to postinjection rehabilitation protocols based on the principle of early, active mobilization. |
Serum C-peptide concentrations poorly phenotype type 2 diabetic end-stage renal disease patients.
A homogeneous patient population is necessary to identify genetic factors that regulate complex disease pathogenesis. In this study, we evaluated clinical and biochemical phenotyping criteria for type 2 diabetes in end-stage renal disease (ESRD) probands of families in which nephropathy is clustered. C-peptide concentrations accurately discriminate type 1 from type 2 diabetic patients with normal renal function, but have not been extensively evaluated in ESRD patients. We hypothesized that C-peptide concentrations may not accurately reflect insulin synthesis in ESRD subjects, since the kidney is the major site of C-peptide catabolism, and would correlate poorly with the accepted clinical criteria used to classify diabetic patients as type 1 or type 2. Consenting diabetic ESRD patients (N = 341) from northeastern Ohio were enrolled. Clinical history was obtained by questionnaire, and predialysis blood samples were collected for C-peptide levels from subjects with at least one living diabetic sibling (N = 127, 48% males, 59% African Americans). Using clinical criteria, 79% of the study population were categorized as type 1 (10%) or type 2 diabetics (69%), while 21% of diabetic ESRD patients could not be classified. In contrast, 98% of the patients were classified as type 2 diabetics when stratified by C-peptide concentrations using criteria derived from the Diabetes Control and Complications Trial (DCCT) and UREMIDIAB studies. Categorization was concordant in only 70% of ESRD probands when the C-peptide concentration and clinical classification algorithms were compared. Using clinical phenotyping criteria as the standard for comparison, C-peptide concentrations classified diabetic ESRD patients with 100% sensitivity, but only 5% specificity. 
The mean C-peptide concentrations were similar in diabetic ESRD patients (3.2 ± 1.9 nmol/L) and nondiabetic ESRD subjects (3.5 ± 1.7 nmol/L, N = 30, P = NS), but were 2.5-fold higher compared with diabetic siblings (1.3 ± 0.7 nmol/L, N = 30, P < 0.05) with normal renal function and were indistinguishable between type 1 and type 2 diabetics. Although 10% of the diabetic ESRD study population was classified as type 1 diabetics using clinical criteria, only 1.5% of these patients had C-peptide levels less than 0.20 nmol/L, the standard cut-off used to discriminate type 1 from type 2 diabetes in patients with normal renal function. However, the criteria of C-peptide concentrations > 0.50 nmol/L and diabetes onset in patients who are more than 38 years old identify type 2 diabetes with a 97% positive predictive value in our ESRD population. Accepted clinical criteria, used to discriminate type 1 and type 2 diabetes, failed to classify a significant proportion of diabetic ESRD patients. In contrast to previous reports, C-peptide levels were elevated in the majority of type 1 ESRD diabetic patients and did not improve the power of clinical parameters to separate them from type 2 diabetic or nondiabetic ESRD subjects. Accurate classification of diabetic ESRD patients for genetic epidemiological studies requires both clinical and biochemical criteria, which may differ from norms used in diabetic populations with normal renal function. |
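The combined rule this abstract proposes (C-peptide > 0.50 nmol/L plus diabetes onset after age 38, reported 97% positive predictive value) reduces to a one-line check. A minimal sketch with illustrative inputs (the function name is ours, not from the study):

```python
def classify_type2(c_peptide_nmol_l, onset_age_years):
    """Rule from the abstract: C-peptide > 0.50 nmol/L AND diabetes onset
    after age 38 identifies type 2 diabetes in ESRD patients (reported
    97% positive predictive value)."""
    return c_peptide_nmol_l > 0.50 and onset_age_years > 38

print(classify_type2(3.2, 55))   # typical ESRD proband values -> True
print(classify_type2(0.15, 22))  # low C-peptide, young onset -> False
```

Note the asymmetry the abstract emphasizes: a positive rule result is highly predictive of type 2 diabetes, but a negative result does not reliably identify type 1 in ESRD, since renal failure raises C-peptide in most patients.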
[Arteriovenous fistula stenosis: diagnosis and radiological treatment].
Dialysis outcome is strongly affected by the function of the vascular access (VA). Thrombosis occurs as a result of decreased vascular access flow caused by progressive stenosis of the access venous outflow tract. By measuring recirculation periodically, every three months, with the Blood Temperature Monitor (BTM; Fresenius Medical Care), it is possible to prevent thrombosis and to avoid unnecessary expense and time-consuming procedures. Using the BTM incorporated in Fresenius 4008 S machines, we measured recirculation in AV fistulas during dialysis. The temperature bolus (thermodilution method) is produced by a temporary change in dialysate temperature (typically about 2.5 degrees C for 2.5 minutes). The measurement is initiated by pressing a single key, and the result is available in 5-6 minutes. When a recirculation measurement exceeded the threshold of 10%, we repeated the measurement on the next two consecutive dialysis sessions. If these were also positive, the patient was referred for Doppler evaluation, elective fistulogram, or both. Patients with hemodynamically significant stenosis underwent angioplasty. Over a period of 42 months, 591 measurements were obtained in 44 patients (22M; 22F), mean patient age 62.3 (61.5M; 63.3F) years. All patients (100%) had native AVFs in their arms. In the observed period we found 22 suspected stenoses. After further evaluation we confirmed 20 stenoses in 11 patients (4M; 4F). We performed 13 PTAs without and 7 with stent placement. In 2 fistulas angiography did not confirm our suspicion, but they thrombosed after an average of 3.7 (1-6.5) months. Three fistulas thrombosed in spite of normal recirculation: two after collapse caused by symptomatic hypotension, and one after intensive physical work. In 4 patients (2M; 2F) we found 1-4 restenoses after percutaneous procedures. Restenoses were treated by PTA again; they occurred after an average of 9.4 (2.5-17.5) months.
Our results in finding stenoses and restenoses confirm that with three-monthly measurements only a few stenoses will go unrecognised and progress rapidly into thrombosis. BTM measurement is easy and quick, can be done by existing staff during every dialysis session, and is non-invasive: it requires no blood sampling or indicator injections, no treatment interruption, and causes no discomfort or stress for the patient. Venography or duplex sonography was used to confirm the lesions. PTA with or without stent placement is safe, simple and efficacious, with rare complications. BTM measurements are sufficiently reproducible and offer the opportunity to extend access monitoring to all haemodialysis patients. We propose screening well-functioning accesses every three months, and accesses that are problematic or have a history of previous stenosis every 4 weeks. For now, accesses flagged by BTM can then be examined by venography or duplex sonography. Screening with recirculation appears to enable earlier detection and therapy. |
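The screening rule described above (when recirculation exceeds 10%, repeat the measurement on the next two consecutive sessions, and refer only if all are positive) can be sketched as a small decision function. The function name, threshold constant, and three-session list representation are assumptions for illustration, not part of the BTM software:

```python
# Minimal sketch of the monitoring rule: flag an access for imaging when
# recirculation exceeds 10% on the index measurement and on the next two
# consecutive dialysis sessions. All names here are illustrative.

THRESHOLD_PCT = 10.0

def needs_referral(recirculation_pct):
    """recirculation_pct: measurements (%) on three consecutive sessions.
    Returns True only when all three exceed the threshold."""
    if len(recirculation_pct) < 3:
        return False
    return all(m > THRESHOLD_PCT for m in recirculation_pct[:3])

print(needs_referral([12.4, 11.0, 13.2]))  # True: refer for Doppler/fistulogram
print(needs_referral([12.4, 8.0, 13.0]))   # False: a repeat was below threshold
```

Requiring three consecutive positives is what keeps a single noisy measurement from triggering an unnecessary fistulogram.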
Cross-sectional survey on Toxoplasma gondii infection in cattle, sheep and pigs in Serbia: seroprevalence and risk factors.
Toxoplasmosis is a globally distributed zoonosis with a clinical impact in the unborn fetus and in the immunosuppressed individual. In Serbia, studies of risk factors for Toxoplasma gondii infection in humans have shown that the relatively high prevalence is associated mainly with consumption of undercooked meat and/or meat products. However, data on T. gondii infection in domestic animals mostly used for human consumption are scarce. We thus conducted a cross-sectional survey on the seroprevalence of T. gondii infection in a representative sample of cattle, sheep and pigs from different regions of Serbia between June 2002 and June 2003, and analyzed the main risk factors associated with the infection. Sera from 611 cattle (yearlings and adults of both sexes), 511 ewes, and 605 pigs (market-weight and sows), were examined for T. gondii antibodies by the modified agglutination test. The seroprevalences determined were 76.3% in cattle, 84.5% in sheep and 28.9% in pigs. The antibody levels ranged from 1:25 to 1:400 in cattle, and up to 1:25,600 in sheep and to 1:12,800 in pigs. Among the seropositive, the proportion of high antibody levels (> or =1:1600), suggestive of acute infection, was 10% in sheep, and 4% in pigs. Possible association of the infection with biologically plausible risk factors including gender, age, herd size/farm type, type of housing, feeding practices and region, was analyzed by univariate analysis, and variables significant at P< or =0.1 were included in multivariate logistic regression models. The results showed that risk factors for cattle were small herd size (odds ratio, OR=2.19, 95% confidence interval, CI=1.28-3.75, P=0.004) and farm location in Western Serbia (OR=2.04, 95% CI=1.10-3.79, P=0.024), while housing in stables with access to outside pens was protective (OR=0.37, 95% CI=0.21-0.67, P=0.001). In sheep, an increased risk of infection was found in ewes from state-owned flocks (OR=4.18, 95% CI=2.18-8.00, P<0.001) vs. 
private flocks, and, interestingly, also in those from Western Serbia (OR=4.66, 95% CI=1.18-18.32, P=0.028). In pigs, the risk of infection was highly increased in adult animals (OR=3.87, 95% CI=2.6-5.76, P<0.001), as well as in those from finishing type farms (OR=3.96, 95% CI=1.97-7.94, P<0.001). In addition to providing data on the current T. gondii seroprevalence in meat animals in Serbia, the results of this study show the main risk factors associated with infection, thereby pointing to the type of preventive measures to reduce T. gondii infection. |
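The odds ratios with 95% confidence intervals reported above come from multivariate logistic regression; for a single binary risk factor, the univariate version reduces to 2x2-table arithmetic with a Wald interval on the log odds ratio. A hedged sketch with invented counts (not the study's data, and not a substitute for the adjusted multivariate estimates):

```python
import math

# Univariate odds ratio and Wald 95% CI from a 2x2 exposure table.
# Counts below are invented for illustration only.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: infected/uninfected among exposed animals;
    c, d: infected/uninfected among unexposed animals."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 20, 30, 33)
print(round(or_, 2))  # 2.2
```

An interval whose lower bound stays above 1 (as for small herd size, OR=2.19, CI 1.28-3.75) is what justifies calling the factor a significant risk.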
[Brain concussion--a minor craniocerebral injury].
Brain concussion is a brain dysfunction without macroscopic structural damage, caused by mechanical force. This paper presents the occurrence and basic characteristics of patients with brain concussion without skull fracture. Its second aim is to answer the questions about this problem that neurosurgeons are most often asked by doctors of other specialties. Posttraumatic amnesia (the patient being unable to remember events before and/or after injury) was required for the diagnosis of brain concussion. In 1995 there were 240 patients with brain concussion without skull fracture at the Department of Urgent Surgery of our Institute. Eighty of them (33%) were admitted to the Neurosurgical Clinic for observation and/or treatment. In all patients with brain concussion the following diagnostic procedure was applied: personal history, physical and neurological examination, basic blood tests and skull x-rays. CT imaging of the brain is not routine because of our economic and technical circumstances. Of the 240 patients examined, 67% were males. The Glasgow coma score (GCS) was 13-15 in all patients, and 15 in all nonhospitalized patients. Fifty-four percent of patients were 15-40 years old, 35% were 41-60 years old and 11% were older than 60 years of age. Average hospitalization lasted 3.48 days. According to the Glasgow outcome scale, all patients had a good recovery. Patients with brain concussion always have amnesia with a normal neurologic status. The legal and clinical definitions of minor head injury are not completely equivalent: legally, brain concussion is always a minor head injury, but patients with organic brain damage (legally a severe injury) can clinically appear to have a minor injury, either initially or throughout the illness. The risk of brain damage in patients with amnesia is about 3%. Posttraumatic amnesia is always established by asking patients to recall events, not by asking them whether they were unconscious.
Brain concussion is often associated with headache and vegetative and/or psychotic difficulties. The diagnostic protocol should comprise at least personal history, physical and neurological examination, and skull x-ray. Consultation with a neurosurgeon and hospitalization are not indicated in all cases; in our series they were performed in 33% of cases, according to established indications. In these cases patients should be transported with documents describing the type of injury, diagnostic results and treatment performed. The therapy is symptomatic. After brain concussion, a gradual return to everyday activities is indicated; sick leave of 7-10 days is usually sufficient. Postconcussion syndrome (headache, vegetative or psychotic disturbances) occurs often and may last for a long period of time. We have tried to describe a doctrine for the diagnosis and treatment of patients with brain concussion that is most appropriate to our technical and economic circumstances. |
Thyroid hormone replacement therapy.
Thyroid hormone replacement has been used for more than 100 years in the treatment of hypothyroidism, and there is no doubt about its overall efficacy. Desiccated thyroid contains both thyroxine (T(4)) and triiodothyronine (T(3)); serum T(3) frequently rises to supranormal values in the absorption phase, associated with palpitations. Liothyronine (T(3)) has the same drawback and requires twice-daily administration in view of its short half-life. Synthetic levothyroxine (L-T(4)) has many advantages: in view of its long half-life, once-daily administration suffices, the occasional missing of a tablet causes no harm, and the extrathyroidal conversion of T(4) into T(3) (normally providing 80% of the daily T(3) production rate) remains fully operative, which may have some protective value during illness. Consequently, L-T(4) is nowadays preferred, and its long-term use is not associated with excess mortality. The mean T(4) dose required to normalize serum thyroid stimulating hormone (TSH) is 1.6 microg/kg per day, giving rise to serum free T(4) (fT(4)) concentrations that are slightly elevated or in the upper half of the normal reference range. The higher fT(4) values are probably due to the need to generate from T(4) the 20% of the daily T(3) production rate that otherwise is derived from the thyroid gland itself. The daily maintenance dose of T(4) varies widely between 75 and 250 microg. Assessment of the appropriate T(4) dose is by assay of TSH and fT(4), preferably in a blood sample taken before ingestion of the subsequent T(4) tablet. Dose adjustments can be necessary in pregnancy and when medications are used that are known to interfere with the absorption or metabolism of T(4). A new equilibrium is reached after approximately 6 weeks, implying that laboratory tests should not be done earlier. With a stable maintenance dose, an annual check-up usually suffices. Accumulated experience with L-T(4) replacement has identified some areas of concern. 
First, the bioequivalence sometimes differs among generics and brand names. Second, many patients on T(4) replacement have a subnormal TSH. TSH values of < or =0.1 mU/l carry a risk of development of atrial fibrillation and are associated with bone loss although not with a higher fracture rate. It is thus advisable not to allow TSH to fall below--arbitrarily--0.2 mU/l. Third, recent animal experiments indicate that only the combination of T(4) and T(3) replacement, and not T(4) alone, ensures euthyroidism in all tissues of thyroidectomized rats. It is indeed the experience of many physicians that there exists a small subset of hypothyroid patients who, despite biochemical euthyroidism, continue to complain of tiredness, lack of energy, discrete cognitive disorders and mood disturbances. As organs vary in the extent to which their T(3) content is derived from serum T(3) or locally produced T(3) from T(4), these complaints may have a biologic substrate; for example, brain T(3) content is largely determined by local deiodinase type II activity. Against this background it is of interest that a number of psychometric scores improved significantly in hypothyroid patients upon substitution of 50 microg of their T(4) replacement dose by 12.5 microg T(3). Confirmatory studies on this issue are urgently awaited. It could well be that a slow-release preparation containing both T(4) and T(3) might improve the quality of life, compared with T(4) replacement alone, in some hypothyroid patients. |
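The weight-based dosing rule quoted above (a mean requirement of about 1.6 microg/kg per day, with maintenance doses falling between 75 and 250 microg) lends itself to a back-of-the-envelope estimate. The 12.5-microgram rounding step below is an assumption for illustration, since available tablet strengths vary by market:

```python
# Back-of-the-envelope sketch of the ~1.6 microgram/kg/day rule for
# L-T4 replacement. The rounding increment is an illustrative assumption;
# actual dose titration is guided by TSH and fT4, not weight alone.

MEAN_DOSE_PER_KG = 1.6  # microgram/kg per day (mean requirement)

def estimated_daily_dose(weight_kg, step=12.5):
    """Round the weight-based estimate to the nearest tablet increment."""
    raw = MEAN_DOSE_PER_KG * weight_kg
    return round(raw / step) * step

print(estimated_daily_dose(70))   # 112.5 micrograms/day for a 70 kg adult
print(estimated_daily_dose(100))  # 162.5 micrograms/day
```

Both example outputs fall inside the 75-250 microg maintenance range given in the text; the estimate is only a starting point before TSH-guided adjustment at roughly 6-week intervals.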
Ventricular proarrhythmic effects of ventricular cycle length and shock strength in a sheep model of transvenous atrial defibrillation.
Synchronized cardioversion is generally accepted as safe for the treatment of ventricular tachycardia and atrial fibrillation when shocks are synchronized to the R wave and delivered transthoracically. However, others have shown that during attempted transvenous cardioversion of rapid ventricular tachycardia, ventricular fibrillation (VF) may be induced. It was our objective to evaluate conditions (short and irregular cycle lengths [CL]) under which VF might be induced during synchronized electrical conversion of atrial fibrillation with transvenous electrodes. In 16 sheep (weight, 62 +/- 7.8 kg), atrial defibrillation thresholds (ADFT) were determined for a 3-ms/3-ms biphasic shock delivered between two catheters each having 6-cm coil electrodes, one in the great cardiac vein under the left atrial appendage and one in the right atrial appendage along the anterolateral atrioventricular groove. A hexapolar mapping catheter was positioned in the right ventricular apex for shock synchronization. In 8 sheep (group A), a shock intensity 20 V less than the ADFT was used for testing, and in the remaining 8 sheep (group B), a shock intensity of twice ADFT was used. With a modified extrastimulus technique, a basic train of eight stimuli alone (part 1) and with single (part 2) and double (part 3) extrastimuli were applied to right ventricular plunge electrodes. Atrial defibrillation shocks were delivered synchronized to the last depolarization. In part 4, shocks were delivered during atrial fibrillation. The preceding CL was evaluated over a range of 150 to 1000 milliseconds. Shocks were also delayed 2, 20, 50, and 100 milliseconds after the last depolarization from the stimulus (parts 1 through 3) or intrinsic depolarization (part 4). The mean ADFT for group A was 127 +/- 48 V, 0.71 +/- 0.60 J and for group B, 136 +/- 37 V, 0.79 +/- 0.42 J (NS, P > .15). Of 1870 shocks delivered, 11 episodes of VF were induced. 
Group A had no episodes of VF in part 1, two episodes of VF in part 2 (CL, 240 and 230 milliseconds with 2-millisecond delay), and one episode each in parts 3 (CL, 280 milliseconds with 2-millisecond delay) and 4 (CL, 240 milliseconds with 100-millisecond delay). Group B had two episodes in part 1 (CL, 250 and 300 milliseconds with 20-millisecond delay), three episodes in part 2 (CL, 230, 230, and 250 milliseconds with 2-millisecond delay), and one episode each in parts 3 (CL, 260 milliseconds with 2-millisecond delay) and 4 (198 milliseconds with 100-millisecond delay). No episodes of VF were induced for shocks delivered after a CL > 300 milliseconds. Synchronized transvenous atrial defibrillation shocks delivered on beats with a short preceding ventricular cycle length (< 300 milliseconds) are associated with a significantly increased risk of initiation of VF. To decrease the risk of ventricular proarrhythmia, short CLs should be avoided. |
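The practical conclusion above (avoid delivering synchronized shocks when the preceding ventricular cycle length is below 300 ms) can be sketched as a timing check over sensed R-wave timestamps. The function names and the use of 300 ms as a hard cutoff are illustrative assumptions, not a device specification:

```python
# Illustrative timing check: permit a synchronized atrial defibrillation
# shock only when the preceding ventricular cycle length exceeds 300 ms.
# Names and the hard cutoff are assumptions for the sketch.

MIN_SAFE_CL_MS = 300

def preceding_cycle_length(r_wave_times_ms):
    """Cycle length between the two most recent sensed R waves, or None."""
    if len(r_wave_times_ms) < 2:
        return None
    return r_wave_times_ms[-1] - r_wave_times_ms[-2]

def shock_permitted(r_wave_times_ms):
    """Withhold the shock after short cycles, where VF was inducible."""
    cl = preceding_cycle_length(r_wave_times_ms)
    return cl is not None and cl > MIN_SAFE_CL_MS

print(shock_permitted([0, 450, 900]))   # True: preceding CL = 450 ms
print(shock_permitted([0, 760, 1000]))  # False: preceding CL = 240 ms
```

In the study, every VF induction followed a cycle length of 300 ms or less, which is why a simple preceding-cycle gate like this captures the proposed safety margin.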
The failure of placebo-controlled studies. ECNP Consensus Meeting, September 13, 1997, Vienna. European College of Neuropsychopharmacology.
In recent years an increasing number of clinical trials to test the efficacy of new potential treatments have failed to demonstrate a difference from placebo for either the new treatment or for an established reference drug. A rise in the response rate to placebo observed in a range of psychiatric disorders has not been paralleled by a rise in the response to drug and small effect sizes make it difficult to establish significant differences. A number of factors are thought to contribute to the rising placebo response or the smaller effect sizes. These include differences over time in the populations studied, changes in investigator behaviour, and failures of trial design. The inclusion of a greater number of patients with mild disorder or whose disorder has a fluctuating course is thought likely to increase the placebo response rates. Close attention needs to be paid to patient selection in terms of diagnosis, severity and absence of confounding comorbidity such as alcoholism, personality disorders or brief depression. The rising placebo response is associated with an increasing variability of placebo response seen in some centres. The ability of some centres to select appropriate patients for studies to demonstrate a separation of reference treatment and placebo and the inability of other centres suggests that a more careful selection of investigators is important. Selection should be based on their experience, their record from previous studies, and their aptitude for being trained. The inclusion of a reference treatment arm provides a useful means to judge the performance of individual centres. The exclusion of eccentric centres that fail to reach predetermined performance criteria, such as a failure to separate reference treatment from placebo, may be considered. Trial designs need to qualify adequately the study population and pay sufficient attention to diagnosis, minimum severity and comorbidity at entry. 
Greater care is needed in excluding concomitant overt or covert psychotherapy and in reducing the unnecessary therapeutic contact that has been increased unwittingly by some protocols. Identifying patients with prior stability of illness, with clear disability, and with a minimum severity at entry is likely to lower the placebo response substantially and increase the effect size and power of the study. The possible influence of comedication should also be considered. The inclusion of a placebo run-in period is considered unhelpful, and better statistical techniques should be adopted to maximise the sensitivity of the study and increase the chances of testing efficacy. |
Usefulness of 3'-[F-18]fluoro-3'-deoxythymidine with positron emission tomography in predicting breast cancer response to therapy.
The usefulness of 2-deoxy-2-[F-18]fluoro-D-glucose (FDG)-positron emission tomography (PET) in monitoring breast cancer response to chemotherapy has previously been reported. Elevated uptake of FDG by treated tumors can persist however, particularly in the early period after treatment is initiated. 3'-[F-18]Fluoro-3'-deoxythymidine (FLT) has been developed as a marker for cellular proliferation and, in principle, could be a more accurate predictor of the long-term effect of chemotherapy on tumor viability. We examined side-by-side FDG and FLT imaging for monitoring and predicting tumor response to chemotherapy. Fourteen patients with newly diagnosed primary or metastatic breast cancer, who were about to commence a new pharmacologic treatment regimen, were prospectively studied. Dynamic 3-D PET imaging of uptake into a field of view centered over tumor began immediately after administration of FDG or FLT (150 MBq). After 45 minutes of dynamic acquisition, a clinically standard whole-body PET scan was acquired. Patients were scanned with both tracers on two separate days within one week of each other (1) before beginning treatment, (2) two weeks following the end of the first cycle of the new regimen, and (3) following the final cycle of that regimen, or one year after the initial PET scans, whichever came first. (Median and mean times of early scans were 5.0 and 6.6 weeks after treatment initiation; median and mean times for late scans were 26.0 and 30.6 weeks after treatment initiation.) Scan data were analyzed on both tumor-by-tumor and patient-by-patient bases, and compared to each patient's clinical course. Mean change in FLT uptake in primary and metastatic tumors after the first course of chemotherapy showed a significant correlation with late (av. interval 5.8 months) changes in CA27.29 tumor marker levels (r = 0.79, P = 0.001). 
When comparing changes in tracer uptake after one chemotherapy course versus late changes in tumor size as measured by CT scans, FLT was again a good predictor of eventual tumor response (r = 0.74, P = 0.01). Tumor uptake of FLT was near-maximal by 10 minutes after injection. The time frame five to 10 minutes postinjection of FLT produced standardized uptake value (SUV) values highly correlated with SUV values obtained after 45-minute uptake (r = 0.83, P < 0.0001), and changes in these early SUVs after the first course of chemotherapy correlated with late changes in CA27.29 (r = 0.93, P = 0.003). A 10-minute FLT-PET scan acquired two weeks after the end of the first course of chemotherapy is useful for predicting longer-term efficacy of chemotherapy regimens for women with breast cancer. |
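The predictive claims above rest on Pearson correlations between early changes in FLT SUV and later changes in tumor markers or tumor size. A self-contained sketch of that computation, with invented paired percentage changes (not the study's data):

```python
import math

# Pearson correlation coefficient, the r statistic reported above.
# The paired percentage changes below are invented for illustration.

def pearson_r(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

early_suv_change = [-40, -25, -10, 5, -35, -15]    # % change after 1st cycle
late_marker_change = [-55, -30, -5, 10, -45, -20]  # % change at follow-up
print(round(pearson_r(early_suv_change, late_marker_change), 2))
```

A strong positive r here means that the patients whose FLT uptake drops most after one chemotherapy course are the same patients whose tumor markers fall most months later, which is the basis for using the early scan as a predictor.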
Implications for clinical staging of metastatic cutaneous squamous carcinoma of the head and neck based on a multicenter study of treatment outcomes.
Cutaneous squamous cell carcinoma (SCC) of the head and neck is a common cancer that has the potential to metastasize to lymph nodes in the parotid gland and neck. Previous studies have highlighted limitations with the current TNM staging system for metastatic skin carcinoma. The aim of this study was to test a new staging system that may provide better discrimination between patient groups. A retrospective multicenter study was conducted on 322 patients from three Australian and three North American institutions. All had metastatic cutaneous SCC involving the parotid gland and/or neck and all were treated for cure with a minimum followup time of 2 years. These patients were restaged using a newly proposed system that separated parotid disease (P stage) from neck disease (N stage) and included subgroups of P and N stage. Metastases involved the parotid in 260 patients (149 P1; 78 P2; 33 P3) and 43 of these had clinical neck disease also (22 N1; 21 N2). Neck metastases alone occurred in 62 patients (26 N1; 36 N2). Ninety percent of patients were treated surgically and 267 of 322 received radiotherapy. Neck nodes were pathologically involved in 32% of patients with parotid metastases. Disease recurred in 105 (33%) of the 322 patients, involving the parotid in 42, neck in 33, and distant sites in 30. Parotid recurrence did not vary significantly with P stage. Disease-specific survival was 74% at 5 years. Survival was significantly worse for patients with advanced P stage: 69% survival at 5 years compared with 82% for those with early P stage (P = 0.02) and for those with both parotid and neck node involvement pathologically: 61% survival compared with 79% for those with parotid disease alone (P = 0.027). Both univariate and multivariate analysis confirmed these findings. Clinical neck involvement among patients with parotid metastases did not significantly worsen survival (P = 0.1). 
This study, which included a mixed cohort of patients from six different institutions, provides further information about the clinical behavior of metastatic cutaneous SCC of the head and neck. The hypothesis that parotid and neck disease should be separated in a new staging system is supported by the results. The benefit of having subgroups of P and N stage is uncertain, but they are likely to identify patients with unfavorable characteristics who may benefit from further research. |
Growth hormone response to growth hormone-releasing hormone varies with the hypothalamic-pituitary abnormalities.
We determined growth hormone (GH) and insulin-like growth factor I (IGF-I) levels after a 3 h infusion of escalating doses of growth hormone-releasing hormone (GHRH(1-29)) followed by a bolus injection in hypopituitary patients with marked differences in pituitary features at magnetic resonance imaging (MRI), in order to evaluate further the contribution of MRI in the definition of pituitary GH reserve in GH-deficient patients. Twenty-nine patients (mean age 14.5 +/- 4.0 years) were studied. Group I comprised 13 patients: seven with isolated GH deficiency (IGHD) (group Ia) and six with multiple pituitary hormone deficiency (MPHD) (group Ib) who had anterior pituitary hypoplasia, unidentified pituitary stalk and ectopic posterior pituitary at MRI. Group II consisted of eight patients with IGHD and small anterior pituitary/empty sella, while in group III eight had IGHD and normal morphology of the pituitary gland. Growth hormone and IGF-I levels were measured during saline infusion at 08.30-09.00 h, as well as after infusion of GHRH(1-29) at escalating doses for 3 h: 0.2 micrograms/kg at 09.00-10.00 h, 0.4 micrograms/kg at 10.00-11.00 h, 0.6 micrograms/kg at 11.00-12.00 h, and an intravenous bolus of 2 micrograms/kg at 12.00 h. In the group I patients, the peak GH response to GHRH(1-29) was delayed (135-180 min) and extremely low (median 2 mU/l). In group II it was delayed (135-180 min), high (median 34.8 mU/l) and persistent (median 37.4 mU/l at 185-210 min). In group III the peak response was high (median 30.8 mU/l) and relatively early (75-120 min) but declined rapidly (median 14.4 mU/l at 185-210 min). In one group I patient, the GH response increased to 34.6 mU/l. The mean basal IGF-I level was significantly lower in group I (0.23 +/- 0.05 U/ml) than in groups II (0.39 +/- 0.13 U/ml, p < 0.01) and III (1.54 +/- 0.46 U/ml, p < 0.001) and did not vary significantly during the GHRH(1-29) infusion.
The present study demonstrates that an impaired GH response to 3 h of continuous infusion of escalating doses of GHRH(1-29) was strikingly indicative of pituitary stalk abnormality, strengthening the case for the use of GHRH in the differential diagnosis of GH deficiency. The low GH response, more severe in MPHD patients, might depend on the residual somatotrope cells, while the better response (34.6 mU/l) in one group Ia patient might suggest that prolonged GHRH infusion could help in evaluating the amount of residual GH pituitary tissue. Given the GH response to GHRH infusion, pituitary GH reserve seems to be maintained in GH-deficient patients with small anterior pituitary/empty sella. |
The synchrony of prostaglandin-induced estrus in cows was reduced by pretreatment with hCG.
The induction of optimal synchrony of estrus in cows requires synchronization of luteolysis and of the waves of follicular growth (follicular waves). The aim of this study was to determine whether hormonal treatments aimed at synchronizing follicular waves improved the synchrony of prostaglandin (PG)-induced estrus. In Experiment 1, cows were treated on Day 5 of the estrous cycle with saline in Group 1 (n = 25; 16 ml, i.v., 12 h apart), with hCG in Group 2 (n = 27; 3000 IU, i.v.), or with hCG and bovine follicular fluid (bFF) in Group 3 (n = 21; 16 ml, i.v., 12 h apart). On Day 12, all cows were treated with prostaglandin (PG; 500 micrograms cloprostenol, i.m.). In Experiment 2, cows were treated on Day 5 of the estrous cycle with saline (3 ml, i.m.) in Group 1 (n = 22) or with hCG (3000 IU, i.v.) in Group 2 (n = 20) and Group 3 (n = 22). On Day 12, the cows were treated with PG (500 micrograms in Groups 1 and 2; 1000 micrograms in Group 3). Blood samples for progesterone (P4) determination were collected on Day 12 (Experiment 1) or on Days 12 and 14 (Experiment 2). Cows were fitted with heat mount detectors and observed twice a day for signs of estrus. Four cows in Experiment 1 (1 cow each from Groups 1 and 2; 2 cows from Group 3) had plasma P4 concentrations below 1 ng/ml on Day 12 and were excluded from the analyses. In Experiment 1, cows treated with hCG or hCG + bFF had a more variable (P = 0.0007, P = 0.0005) day of occurrence of and a longer interval to estrus (5.9 +/- 0.7 d, P = 0.003 and 6.2 +/- 0.8 d, P = 0.005) than saline-treated cows (3.4 +/- 0.4 d). The plasma P4 concentrations on Day 12 were higher (P < 0.0001) in hCG- and in hCG + bFF-treated cows than in saline-treated cows (9.4 +/- 0.75 and 8.5 +/- 0.75 vs 4.1 +/- 0.27 ng/ml), but there was no correlation (P > 0.05) between plasma P4 concentrations and the interval to estrus. 
In Experiment 2, cows treated with hCG/500PG and hCG/1000PG had a more variable (P = 0.0007, P = 0.002) day of occurrence of and a longer interval to estrus (4.2 +/- 0.4 d, P = 0.04; 4.1 +/- 0.4 d, P = 0.03) than saline/500PG-treated cows (3.2 +/- 0.1 d). The concentrations of plasma P4 on Days 12 and 14 of both hCG/500PG- and hCG/1000PG-treated cows were higher (P < 0.05) than in saline/500PG-treated cows (7.3 +/- 0.64, 0.7 +/- 0.08 and 7.7 +/- 0.49, 0.7 +/- 0.06 vs 5.3 +/- 0.37, 0.5 +/- 0.03 ng/ml). The concentrations of plasma P4 on Days 12 or 14 and the interval to estrus were not correlated (P > 0.05) in any treatment group. The concentrations of plasma P4 on Days 12 and 14 of hCG/500PG- or hCG/1000PG-treated cows were correlated (r = 0.65, P < 0.05; r = 0.50, P < 0.05). This study indicated that treatment of cows with hCG on Day 5 of the estrous cycle reduced the synchrony of PG-induced estrus and that this reduction was not due to the failure of luteal regression. |
Postmenopausal status, hypertension and obesity as risk factors for malignant transformation in endometrial polyps.
We analyzed the clinical data and pathological features of six cases of malignant endometrial polyps, to compare these with other examples reported in the literature and to define the features of endometrial cancer arising in polyps. Moreover, to clarify the mechanisms of carcinogenesis in malignant endometrial polyps, we examined the expression of cyclooxygenase-2 (COX-2), P53 and Ki 67 and their relationships with clinicopathologic characteristics. The surgical pathology files of the Pathology Department of Parma University were searched for cases of endometrial polyps with nests of endometrial carcinoma from the years 2002-2005. Clinical records, histological slides of endometrial curettings, hysterectomy with salpingo-oophorectomy specimens and pelvic lymph nodes were reviewed in each case. The main pathological features analyzed were the histological type of endometrial cancer and the stage of development of the neoplasm. The presence of other malignancies in the genital tract was also considered. Immunohistochemical staining was done using antibodies against COX-2, p53 and Ki 67. In our study, all malignant endometrial polyps were detected in postmenopausal women. The majority of our patients with malignant endometrial polyps had risk factors for the development of endometrial carcinoma such as hypertension, obesity and unopposed estrogen therapy. Unlike other studies, no patient had a history of previous breast carcinoma or Tamoxifen treatment. The most common subtypes of endometrial carcinoma in malignant polyps are endometrioid carcinoma and serous papillary carcinoma. Endometrial carcinoma arising in endometrial polyps is an early endometrial carcinoma with good prognosis, except for papillary serous carcinoma, which can be associated with multiple omental involvement despite a low stage of development in the uterus.
Immunohistochemical study showed that COX-2 expression was found in the cytoplasm of tumor cells and was elevated in all cases, independently of the grade and stage of development of the malignancy, the histological subtype and deep myometrial invasion. P53 and Ki 67 expression, detected in the nuclei of neoplastic cells, was not correlated with COX-2 immunoreactivity, but these markers were associated with more advanced stage, grade and histologic subtype of tumor. Postmenopausal status, hypertension and obesity could all be considered risk factors for carcinomatous transformation within endometrial polyps in women without a history of breast carcinoma and Tamoxifen treatment. However, our series is small (only six cases) and further studies are necessary to confirm this hypothesis. In the current study, the immunohistochemical data reveal that COX-2 expression may be associated with carcinogenesis in endometrial carcinomas arising in endometrial polyps, but COX-2 expression does not correlate with tumor aggressiveness or with P53 and Ki 67 expression. P53 and Ki 67 overexpression, in contrast, are associated with advanced stage, histologic subtype and deep myometrial invasion of the neoplasm. |
[Refractive and biometric changes in adolescent guinea pig eyes during the development and recovery stages of form-deprivation myopia].
To investigate changes in refraction and vitreous length during form-deprivation and visual re-exposure in guinea pig eyes. This was an experimental study. Ninety-six guinea pigs aged three weeks were randomly divided into form-deprivation and normal control groups (n = 48 in each group). The form-deprivation group was further divided into 4 subgroups (n = 12 in each subgroup) which underwent monocular form-deprivation for 1, 2, 4, and 6 weeks, respectively. At the end of each time point, the form-deprived eyes in all animals were visually re-exposed and followed for 3 (n = 6) or 7 days (n = 6). The control group was also divided into four subgroups (n = 12 in each subgroup) to match the time points of the form-deprivation group. During form-deprivation and recovery, vitreous length and refraction in each group were measured and compared. There were significant differences in vitreous length (F = 6.108, 28.222, 19.195) and refraction (F = 12.504, 15.003, 6.829) between deprived eyes and contralateral eyes 2, 4, or 6 weeks after form-deprivation (P < 0.05). The difference in refraction between deprived eyes and contralateral eyes was -2.36 D, -3.64 D and -3.68 D at 2, 4, and 6 weeks, respectively; the difference in vitreous length was 0.08 mm, 0.19 mm and 0.22 mm. During visual re-exposure, form-deprived eyes became hyperopic relative to contralateral eyes. At day 3, there was no significant difference in refraction or vitreous length between form-deprived eyes and contralateral eyes in the 1-week and 2-week groups (F = 0.032, 0.280; P > 0.05). After 7 days of recovery, vitreous length and refraction in deprived eyes had almost returned to the level of contralateral eyes in the 1- and 2-week groups. At day 3, there were significant differences in refraction and vitreous length between form-deprived eyes and contralateral eyes in the 4-week and 6-week groups.
After 7 days of recovery, there was a significant difference in vitreous length in the 4-week group and significant differences in both refraction and vitreous length in the 6-week group (F = 4.108, 6.317; P < 0.05). Form-deprivation causes myopic changes in deprived eyes; during visual re-exposure the refraction recovers, and the extent of recovery depends on the duration of form-deprivation. The recovery rate is faster during the first 3 days and slower thereafter. The mechanism of form-deprivation myopia in guinea pig eyes is similar to that of myopia in juvenile humans.
Subunit structure of deglycosylated human and swine trachea and Cowper's gland mucin glycoproteins.
The oligosaccharide chains of human and swine trachea and Cowper's gland mucin glycoproteins were completely removed in order to examine the subunit structure and properties of the polypeptide chains of these glycoproteins. The carbohydrate, which constitutes more than 70% of these glycoproteins, was removed by two treatments with trifluoromethanesulfonic acid for 3 h at 3 degrees and by periodate oxidation in a modified Smith degradation. All of the sialic acid, fucose, galactose, N-acetylglucosamine and N-acetylgalactosamine present in these glycoproteins was removed by these procedures. The deglycosylated polypeptide chains were purified and characterized. The size of the monomeric forms of all three polypeptide chains was very similar. Data obtained by gel filtration, release of amino acids during hydrolysis with carboxypeptidase B and gel electrophoresis in the presence of 0.1% dodecyl sulfate showed that a major fraction from each of the three mucin glycoproteins had a molecular size of about 67 kDa. All of the deglycosylated chains had a tendency to aggregate. Digestion with carboxypeptidases showed that human and swine trachea mucin glycoproteins had identical carboxyl-terminal sequences, -Val-Ala-Phe-Tyr-Leu-Lys-Arg-COOH. Cowper's gland mucin glycoprotein had a similar carboxyl-terminal sequence, -Val-Ala-Tyr-Leu-Phe-Arg-Arg-COOH. The yield of amino acids after long periods of hydrolysis with carboxypeptidases showed that at least 85% of the polypeptide chains in each of the deglycosylated preparations have these sequences. These results suggested that the polypeptide chains in these deglycosylated mucin glycoprotein preparations were relatively homogeneous. The deglycosylated polypeptide chains, as well as the intact mucin glycoproteins, had blocked amino termini. The purified polypeptide chains were digested with TPCK-trypsin and S.
aureus V8 protease, and the resulting peptides were isolated by gel electrophoresis in the presence of 0.1% dodecyl sulfate and by HPLC. Two partial amino acid sequences from swine trachea mucin glycoprotein, two from human trachea mucin glycoprotein and three from Cowper's gland mucin glycoprotein were determined. The partial amino acid sequences of the peptides isolated from swine trachea mucin glycoprotein showed more than 70% sequence homology to a repeating sequence present in porcine submaxillary mucin glycoprotein. Five to eight immunoprecipitable bands with sizes ranging from about 40 kDa to 46 kDa were seen when the polypeptide chains were digested with S. aureus V8 protease. All of the bands had blocked amino termini and differed by a constant molecular weight of about 1.5 kDa. (ABSTRACT TRUNCATED AT 400 WORDS)
Diminutive Polyps With Advanced Histologic Features Do Not Increase Risk for Metachronous Advanced Colon Neoplasia.
With advances in endoscopic imaging, it is possible to differentiate adenomatous from hyperplastic diminutive (1-5 mm) polyps during endoscopy. With the optical Resect-and-Discard strategy, these polyps are then removed and discarded without histopathology assessment. However, failure to recognize adenomas (vs hyperplastic polyps), or discarding a polyp with advanced histologic features, could result in a patient being considered at low risk for metachronous advanced neoplasia, resulting in an inappropriately long surveillance interval. We collected data from international cohorts of patients undergoing colonoscopy to determine what proportion of patients are classified as high risk because their diminutive polyps have advanced histologic features, and to determine these patients' risk for metachronous advanced neoplasia. We collected data from 12 cohorts (in the United States or Europe) of patients undergoing colonoscopy after a positive result from a fecal immunochemical test (FIT cohort, n = 34,221) or undergoing colonoscopy for screening, surveillance, or evaluation of symptoms (colonoscopy cohort, n = 30,123). Patients at high risk for metachronous advanced neoplasia were defined as patients with polyps that had advanced histologic features (cancer, high-grade dysplasia, ≥25% villous features), 3 or more diminutive or small (6-9 mm) nonadvanced adenomas, or an adenoma or sessile serrated lesion ≥10 mm. Using an inverse variance random effects model, we calculated the proportion of diminutive polyps with advanced histologic features; the proportion of patients classified as high risk because their diminutive polyps had advanced histologic features; and the risk of these patients for metachronous advanced neoplasia.
Among 51,510 diminutive polyps, advanced histologic features were observed in 7.1% of polyps from the FIT cohort and 1.5% of polyps from the colonoscopy cohort (P = .044); however, this difference in prevalence did not produce a significant difference in the proportions of patients assigned high-risk status (0.8% of patients in the FIT cohort and 0.4% of patients in the colonoscopy cohort; P = .25). The proportion of patients classified as high risk because of diminutive polyps with advanced histologic features who were found to have metachronous advanced neoplasia (17.6%) did not differ significantly from the proportion of low-risk patients with metachronous advanced neoplasia (14.6%) (relative risk for high-risk categorization, 1.13; 95% confidence interval 0.79-1.61). In a pooled analysis of data from 12 international cohorts of patients undergoing colonoscopy for screening, surveillance, or evaluation of symptoms, we found that diminutive polyps with advanced histologic features do not increase the risk for metachronous advanced neoplasia.
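The pooling step described above uses an inverse variance random effects model. A minimal generic sketch of the standard DerSimonian-Laird estimator, on hypothetical study-level effects and variances (this is not the authors' analysis code):

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effects (e.g., logit-transformed proportions)
    with the DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights, pooled estimate and 95% CI
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When the studies agree exactly, the between-study variance collapses to zero and the estimate reduces to the fixed-effect (inverse-variance) result.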
Effects of microRNA-21 on Nerve Cell Regeneration and Neural Function Recovery in Diabetes Mellitus Combined with Cerebral Infarction Rats by Targeting PDCD4.
We aimed to determine the effect and mechanism of microRNA-21 (miR-21) on nerve cell regeneration and neural function recovery in rats with diabetes mellitus combined with cerebral infarction (DM + CI) through its targeting of PDCD4. A total of 125 male Wistar rats were selected for DM + CI model construction and assigned to the blank, miR-21 mimics, mimics control, miR-21 inhibitor, inhibitor control, miR-21 inhibitor + si-PDCD4 and si-PDCD4 groups. In addition, 20 healthy rats were selected as the normal group. Triphenyltetrazolium chloride (TTC) staining and HE staining were used to determine the area of CI and the pathological changes, respectively. Behaviors of rats in the eight groups were assessed by the forelimb placement test and the balance beam walking test. Immunohistochemical staining, double immunofluorescence staining, Western blotting and qRT-PCR were used to detect the expression of miR-21, PDCD4, HNA, Nestin, NeuN, β-III-Tub, PTEN, FasL and GFAP. DNA laddering and TUNEL staining were used to assess cell apoptosis. TTC and HE staining confirmed that 87.5% of rats were successfully induced into DM + CI models. Results of the forelimb placement test and balance beam walking test showed that miR-21 mimics and si-PDCD4 improved the nerve defects of model rats. Compared with the blank group at the same time points, rats in the miR-21 inhibitor group displayed a significant decrease in the forelimb placement test score, a significant increase in the balance beam walking test score, and exacerbation of nerve defects, while rats in the miR-21 mimics and si-PDCD4 groups displayed a significant increase in the forelimb placement test score, a significant decrease in the balance beam walking test score, and improvement of nerve defects. HNA, Nestin and PDCD4 expression was decreased and NeuN, β-III-Tub and GFAP expression was increased in the miR-21 mimics and si-PDCD4 groups compared with the blank group; the miR-21 inhibitor group showed the opposite pattern.
In comparison with the blank group, the miR-21 mimics and si-PDCD4 groups had lower miR-21 expression and higher expression of PDCD4, PTEN and FasL, while the miR-21 inhibitor group showed the opposite trend. The qRT-PCR results were consistent with those of Western blotting. Fluorescence expression in the other groups was higher than in the normal group; compared with the blank group, the miR-21 mimics and si-PDCD4 groups had lower fluorescence expression and less DNA laddering, whereas the fluorescence expression and DNA laddering of the miR-21 inhibitor group increased markedly relative to the blank group. Compared with the blank group, BrdU+/DCX+ fluorescence intensity was significantly enhanced in the miR-21 mimics and si-PDCD4 groups and significantly reduced in the miR-21 inhibitor group. Furthermore, compared with the blank group, the signal strength of luciferase carrying wild-type PDCD4 was reduced by 25% in the miR-21 mimics group. The present study demonstrated that miR-21 could promote nerve cell regeneration, suppress apoptosis of nerve cells in DM + CI rats and improve their nerve defects by inhibiting PDCD4.
[Establishment of a prognostic nomogram to predict long-term survival in non-metastatic colorectal cancer patients].
To establish a nomogram to predict long-term survival in non-metastatic colorectal cancer patients. A retrospective analysis was conducted of patients with non-metastatic colorectal cancer who underwent radical surgery in the Department of Colorectal Surgery of the Affiliated Union Hospital of Fujian Medical University between January 2000 and December 2014. Univariate and multivariate analyses of disease-free survival (DFS) were performed using the Cox proportional hazards regression model. Based on the multivariate analysis results, a prognostic nomogram was formulated to predict the probability of DFS. The concordance index was applied to evaluate the predictive performance of the nomogram, and calibration curves were drawn to compare the nomogram's predicted and actually observed 5-year DFS rates. The predictive ability of the nomogram was compared with that of the AJCC-7 staging system. A total of 2641 patients were identified. The median age was 59.3 years, and 60.3% of cases were men. The numbers of patients with TNM stage 0, I, II and III disease were 96, 505, 923 and 1043, respectively. The most common tumor site was the rectum, accounting for 43.2%. A total of 413 (15.6%) patients underwent neoadjuvant treatment. The most common gross type of tumor was the ulcerative type, accounting for 79.5%. The 3- and 5-year DFS rates were 85.8% and 79.8%, respectively. Based on the Cox proportional hazards regression model, the following six factors were independently associated with reduced DFS and were selected for the nomogram: older age, higher pathologic T stage, higher pathologic N stage, higher preoperative serum CEA level, infiltrative gross type and perineural invasion. In the nomogram, the scores for T0, T1, T2, T3 and T4 stage were 0, 2.2, 3.9, 4.1 and 6, respectively, and the scores for N0, N1 and N2 were 0, 3.8 and 9.3, respectively. For gross type, the scores for the expanding, ulcerative and infiltrative types were 6, 9 and 10, respectively. The score for perineural invasion was 5.2.
Higher scores were assigned to older age and higher CEA levels. The total score was calculated as the sum of the points from all predictors, and a higher total score was associated with poorer DFS. The prognostic nomogram discriminated well, with a concordance index of 0.718, which was better than that of the AJCC-7 staging system (concordance index = 0.683). The calibration of the nomogram predictions was also good. A nomogram based on six independent prognostic factors to predict long-term survival in non-metastatic colorectal cancer patients was established successfully. The nomogram can be conveniently used to facilitate accurate individualized prediction of DFS in patients with non-metastatic colorectal cancer.
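The concordance index reported here measures how often the model ranks patient risk in the same order as the observed outcomes. A minimal sketch of Harrell's c-index for right-censored survival data, on hypothetical inputs (not the study data):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's concordance index: among usable pairs (the patient with
    the shorter follow-up actually had the event), count pairs in which
    the higher risk score belongs to the patient who failed earlier.
    Ties in risk score count as half-concordant."""
    concordant = permissible = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable only if patient i had the event before
            # patient j's observed time.
            if events[i] and times[i] < times[j]:
                permissible += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / permissible
```

A value of 0.5 means ranking no better than chance, 1.0 means perfect ranking; the nomogram's 0.718 versus 0.683 for AJCC-7 is a comparison on this scale.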
Unmet needs for family planning among married women aged 15-49 years living in two settlements with different socioeconomic and cultural characteristics: a cross-sectional study from Karabuk Province in Turkey.
The aim of the study was to investigate the levels of, and factors related to, unmet need for family planning among married women aged 15-49 years living in two settlements (rural and urban) with different economic, social and cultural structures in Karabuk, a province in north-western Turkey. This cross-sectional study was conducted in the rural Cumayani village and the urban Emek neighbourhood between October 2016 and June 2017. The sample size was determined to be 289 married women aged 15-49 years from each settlement, based on an effect size of 0.3, an alpha error probability of 0.05 and a power of 0.95. In the study, 594 currently married women (298 from Cumayani and 296 from Emek) were contacted. The dependent variable was unmet need for family planning. The independent variables included the sociodemographic and reproductive characteristics of the women. The data were collected through face-to-face interviews. The characteristics of the two settlements were compared using the χ2 test. Bivariate and multivariate logistic regression analyses were carried out to examine the factors associated with the dependent variable. The comparison of the participants demonstrated that the education, employment and income levels of the rural women were lower than those of the urban women (p<0.001). The rural women had more pregnancies, miscarriages and stillbirths, and mortality among their children was higher than among those of the urban women (p<0.001). The level of unmet need for family planning in Cumayani village was about twice that of the Emek neighbourhood (9.7% vs 5.4%). The multivariate analysis was conducted separately for each settlement. Marrying by way of only a religious ceremony increased the odds of unmet need for family planning 4.61-fold (95% confidence interval (CI) 1.3-16.1; p=0.016) in Cumayani.
The multivariate analysis of all the women participating in the study revealed that marriage by way of only a religious ceremony increased the odds of unmet need 4.96-fold (95% CI 1.4-17.1; p=0.011). The study showed that the socioeconomic and cultural factors affecting women's fertility behaviours and unmet need for family planning favoured urban women. Not being married by civil marriage was a significant predictor of unmet need. These findings highlight a need for intervention, particularly for the empowerment of rural women, in order to improve reproductive health outcomes.
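Effect sizes like the 4.96 (95% CI 1.4-17.1) above come from exponentiating a logistic-regression coefficient and its confidence bounds. A generic sketch of that conversion; the coefficient and standard error below are hypothetical, not taken from the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient
    (beta, on the log-odds scale) and its standard error (se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical example: beta = 1.6, se = 0.64
or_, lo, hi = odds_ratio_ci(1.6, 0.64)
```

Because the interval is symmetric on the log-odds scale, it becomes asymmetric after exponentiation, which is why CIs such as 1.4-17.1 are skewed around the point estimate.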
Predicting enteric methane emission of dairy cows with milk Fourier-transform infrared spectra and gas chromatography-based milk fatty acid profiles.
The objective of the present study was to compare the prediction potential of milk Fourier-transform infrared spectroscopy (FTIR) for CH4 emissions of dairy cows with that of gas chromatography (GC)-based milk fatty acids (MFA). Data from 9 experiments with lactating Holstein-Friesian cows, with a total of 30 dietary treatments and 218 observations, were used. Methane emissions were measured for 3 consecutive days in climate respiration chambers and expressed as production (g/d), yield (g/kg of dry matter intake; DMI), and intensity (g/kg of fat- and protein-corrected milk; FPCM). Dry matter intake was 16.3 ± 2.18 kg/d (mean ± standard deviation), FPCM yield was 25.9 ± 5.06 kg/d, CH4 production was 366 ± 53.9 g/d, CH4 yield was 22.5 ± 2.10 g/kg of DMI, and CH4 intensity was 14.4 ± 2.58 g/kg of FPCM. Milk was sampled during the same days and analyzed by GC and by FTIR. Multivariate GC-determined MFA-based and FTIR-based CH4 prediction models were developed, and subsequently, the final CH4 prediction models were evaluated with root mean squared error of prediction and concordance correlation coefficient analysis. Further, we performed a random 10-fold cross validation to calculate the performance parameters of the models (e.g., the coefficient of determination of cross validation). The final GC-determined MFA-based CH4 prediction models estimate CH4 production, yield, and intensity with a root mean squared error of prediction of 35.7 g/d, 1.6 g/kg of DMI, and 1.6 g/kg of FPCM and with a concordance correlation coefficient of 0.72, 0.59, and 0.77, respectively. The final FTIR-based CH4 prediction models estimate CH4 production, yield, and intensity with a root mean squared error of prediction of 43.2 g/d, 1.9 g/kg of DMI, and 1.7 g/kg of FPCM and with a concordance correlation coefficient of 0.52, 0.40, and 0.72, respectively. 
The GC-determined MFA-based prediction models described a greater part of the observed variation in CH4 emission than did the FTIR-based models. The cross validation results indicate that all CH4 prediction models (both GC-determined MFA-based and FTIR-based models) are robust; the difference between the coefficient of determination and the coefficient of determination of cross validation ranged from 0.01 to 0.07. The results indicate that GC-determined MFA have a greater potential than FTIR spectra to estimate CH4 production, yield, and intensity. Both techniques hold potential but may not yet be ready to predict CH4 emission of dairy cows in practice. Additional CH4 measurements are needed to improve the accuracy and robustness of GC-determined MFA and FTIR spectra for CH4 prediction. |
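The two evaluation metrics used above (root mean squared error of prediction and the concordance correlation coefficient) can be computed directly. A minimal sketch on hypothetical observed/predicted CH4 values, not the study data:

```python
import math

def rmsep(obs, pred):
    """Root mean squared error of prediction."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def lin_ccc(obs, pred):
    """Lin's concordance correlation coefficient: Pearson correlation
    penalised by any deviation of the fit from the 45-degree line
    (location and scale shift)."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    vo = sum((o - mo) ** 2 for o in obs) / n
    vp = sum((p - mp) ** 2 for p in pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred)) / n
    return 2 * cov / (vo + vp + (mo - mp) ** 2)
```

Unlike Pearson's r, the CCC is reduced by systematic bias: a model whose predictions are perfectly correlated with observations but consistently offset still scores below 1.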
Which stoma works better for colonic dysmotility in the spinal cord injured patient?
The formation of an intestinal stoma is very effective in the treatment of colonic dysmotility associated with spinal cord injury (SCI). Little is known about differences in long-term outcome among left-sided colostomies, right-sided colostomies, and ileostomies in this patient population. The records of 45 SCI patients with intestinal stomas at our institution were reviewed retrospectively. Operative details and preoperative colonic transit times (CTT) were recorded. Patients who were alive and available were interviewed using a questionnaire designed to assess quality of life (QOL), health status, and time required for bowel care before and after stoma formation. Between 1976 and 2002, 45 patients underwent formation of a total of 48 intestinal stomas. A left-sided colostomy (LC) was formed in 21 patients, a right-sided colostomy (RC) in 20, and an ileostomy (IL) in 7. Three of the patients in the RC group ultimately underwent total abdominal colectomy and ileostomy. The indications for stoma formation and the CTTs differed among the three groups. Bloating, constipation, chronic abdominal pain, and difficulty with evacuation accompanied by prolonged CTT were the main indications in 95% of patients in the RC group, 43% in the LC group, and 29% in the IL group. Management of complicated decubitus ulcers and perineal and pelvic wounds was the primary indication in 43% of patients in the LC group, 5% in the RC group, and none in the IL group. Preoperative total and right-sided CTTs were longer in the RC group than in the LC group: 127.5 versus 83.1 hours (P <0.05) and 53.7 versus 28.5 hours (P <0.05), respectively. Eighty-two percent of patients (37 of 45) were interviewed at a mean follow-up of 5.5 years after stoma formation. Most patients who were interviewed were satisfied with their stoma (RC, 88%; LC, 100%; IL, 83%) and the majority would have preferred to have had the stoma earlier (RC, 63%; LC, 77%; IL, 63%).
The QOL index significantly improved in all groups (RC, 49 to 79, P <0.05; LC, 50 to 86, P <0.05; IL, 60 to 82, P <0.05), as did the health status index (RC, 58 to 83, P <0.05; LC, 63 to 92, P <0.05; IL, 61 to 88, P <0.05). The average daily time required for bowel care was significantly shortened in all groups (RC, 102 to 11 minutes, P <0.05; LC, 123 to 18 minutes, P <0.05; IL, 73 to 13 minutes, P <0.05). Regardless of the type of stoma, most patients improved functionally postoperatively. Patients who underwent RC had longer CTTs and more chronic symptoms related to colonic dysmotility, reflecting preoperative selection bias. The successful outcome noted in all groups suggests that preoperative symptoms and CTT studies may be helpful in the optimal selection of stoma type.
Human prostatic cancer cells are sensitive to programmed (apoptotic) death induced by the antiangiogenic agent linomide.
Human prostatic cancer cells have a remarkably low rate of proliferation even when they have metastasized to the bone and have become androgen independent (Berges et al., Clin. Cancer Res., 1:473-480, 1995). Because of this low proliferation rate, patients with such androgen-independent metastatic prostatic cancer are rarely treated successfully with the presently available chemotherapeutic agents. Therefore, new approaches are urgently needed that do not depend on the rate of cancer cell proliferation for their effectiveness. One such approach is to inhibit the angiogenic response within localized and metastatic cancer deposits, since the resultant hypoxia-induced tumor cell death does not require cell proliferation. We have previously demonstrated that the quinoline-3-carboxamide linomide is a p.o.-active agent that inhibits tumor angiogenesis, and thus blood flow, in a variety of rat prostatic cancers independent of their growth rate, androgen sensitivity, or metastatic ability. Because of its antiangiogenic effects, linomide treatment induces the hypoxic death of rat prostatic cancer cells, thus inhibiting their net growth and metastasis. To determine whether human prostatic cancer cells are similarly sensitive to hypoxia-induced death caused by linomide inhibition of tumor angiogenesis, androgen-independent TSU and PC-3 human prostatic cancer cells were xenotransplanted into SCID mice that were either untreated or treated p.o. with linomide. These studies demonstrated that linomide treatment decreases microvessel density in both androgen-independent human prostatic cancers. Microvessel density was decreased from 1.8 +/- 0.4% of the total area in control tumors to 1.0 +/- 0.2% in linomide-treated TSU tumors [i.e., a 44% decrease in microvessel density (P < 0.05)]. Similarly, a 56% decrease (P < 0.05) was observed in the microvessel density of PC-3 tumors (i.e., 2.7 +/- 0.8% of the area in control tumors versus 1.2 +/- 0.2% in the linomide-treated tumors).
This inhibition of angiogenesis increased cell death in both TSU and PC-3 cancers. This was reflected both in an increase in the area of necrosis and in an increase in the apoptotic index in non-necrotic areas. In untreated TSU tumors, 40 +/- 2% of the tumor volume was necrotic. Linomide treatment increased this necrotic percentage to 59 +/- 2% [i.e., a 48% increase (P < 0.05)]. Linomide therapy also increased apoptotic cell death in non-necrotic tumor areas. In the untreated TSU tumors, 2.9 +/- 0.6% of tumor cells in the non-necrotic areas were apoptotic, and in the linomide-treated TSU tumors this percentage increased to 3.6 +/- 0.4% [i.e., a 24% increase (P < 0.05)]. (ABSTRACT TRUNCATED AT 400 WORDS)
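The percentage changes quoted in this abstract (1.8% to 1.0% microvessel area reported as a 44% decrease; 40% to 59% necrosis as a 48% increase; 2.9% to 3.6% apoptosis as a 24% increase) are relative changes against the baseline value. A quick sketch of the arithmetic:

```python
def percent_change(before, after):
    """Relative change, expressed as a percentage of the baseline value."""
    return 100.0 * (after - before) / before

# 1.8 -> 1.0 is about a 44% decrease; 40 -> 59 about a 48% increase;
# 2.9 -> 3.6 about a 24% increase, matching the reported figures.
```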
Thioridazine for dementia.
Neuroleptic drugs are controversial treatments in dementia, with evidence accumulating that they may hasten clinical decline. Despite these concerns, they are commonly prescribed for elderly and demented patients. Thioridazine, a phenothiazine neuroleptic, is one of the most commonly prescribed. It has often been a preferred agent because it is thought to produce relatively infrequent motor side effects. The drug has significant sedative effects, and these are thought to be the main mechanism of action in calming and controlling the patient. However, pharmacologically, it also has marked anticholinergic properties that could potentially have a detrimental effect on cognitive function. The objective was to determine the evidence on which the use of thioridazine in dementia is based, in terms of: 1) efficacy in controlling symptoms; 2) cognitive outcome for the patient; and 3) safety. The Cochrane Controlled Trials Register and other electronic databases were searched using the terms 'thioridazine', 'melleril', 'dementia' and 'old age'. In addition, Novartis, the pharmaceutical company that developed and markets thioridazine, was approached and asked to release any published or unpublished data it had on file. Unconfounded, single-blind or double-blind, randomised trials were identified in which treatment with thioridazine was administered for more than one dose and compared with an alternative intervention in patients with dementia of any aetiology. Trials in which allocation to treatment or comparator was not truly random, or in which treatment allocation was not concealed, were reviewed but are not included in the data analysis. Data were extracted independently by the reviewers (VC, CAK and RJH). For continuous and ordinal variables, the main outcome measures of interest were the final assessment score and the change in score from baseline to the final assessment.
The assessment scores were provided by behavioural rating scales, clinical global impression scales, functional assessment scales, psychometric test scores, and the frequency and severity of adverse events. Data were pooled where appropriate or possible, and the Peto odds ratio (95% CI) or the weighted mean difference (95% CI) was estimated. Where possible, intention-to-treat data were used. The meta-analysis showed that, compared with placebo, thioridazine reduced anxiety symptoms, as evidenced by changes on the Hamilton Anxiety Scale. However, there was no significant effect on clinical global change, and a non-significant trend toward more adverse effects with thioridazine. Compared with diazepam, thioridazine was superior in terms of some anxiety symptoms, with similar adverse effects. Global clinical evaluation scales mostly did not favour either treatment. Compared with chlormethiazole, thioridazine was significantly inferior when assessed on some items of the CAPE and the Crichton Geriatric Behavioural Rating Scale. Thioridazine was also associated with significantly more dizziness. No superiority for thioridazine was shown in comparisons with etoperidone, loxapine or zuclopenthixol. Very limited data are available to support the use of thioridazine in the treatment of dementia. If thioridazine were not currently in widespread clinical use, there would be inadequate evidence to support its introduction. The only positive effect of thioridazine compared with placebo is the reduction of anxiety. Compared with placebo, other neuroleptics, and other sedatives, it has equal or higher rates of adverse effects. Clinicians should be aware that there is no evidence to support the use of thioridazine in dementia, and that its use may expose patients to excess side effects.
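The Peto odds ratio mentioned above is the fixed-effect estimator commonly used in Cochrane reviews for binary outcomes with sparse events. A minimal single-table sketch (the counts are hypothetical, not taken from the review):

```python
import math

def peto_or(a, n1, c, n0):
    """Peto odds ratio for one 2x2 table: a events among n1 patients in
    the treatment arm, c events among n0 controls. Uses the
    observed-minus-expected statistic and its hypergeometric variance."""
    n = n1 + n0
    d = a + c                                        # total events
    e = n1 * d / n                                   # expected events, treatment arm
    v = n1 * n0 * d * (n - d) / (n ** 2 * (n - 1))   # hypergeometric variance
    or_ = math.exp((a - e) / v)
    lo = math.exp((a - e - 1.96 * math.sqrt(v)) / v)
    hi = math.exp((a - e + 1.96 * math.sqrt(v)) / v)
    return or_, lo, hi
```

Across several trials, the per-table (O - E) and V terms are summed before exponentiating, which is what gives the method its robustness with rare events.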
[Comparison of tubeless-percutaneous nephrolithotomy and ureteroscopic lithotripsy in treatment of upper-ureteral calculi sized ≥ 1.5 cm].
To compare the efficacy and safety of tubeless percutaneous nephrolithotomy (tubeless-PCNL) and ureteroscopic lithotripsy (URL) in the treatment of impacted upper-ureteral calculi ≥ 1.5 cm in size. Patients with ureteral stones sized ≥ 1.5 cm and lodged above the fourth lumbar vertebra who were treated between September 2009 and July 2013 in Peking University People's Hospital were retrospectively analyzed. In the study, 182 patients underwent tubeless-PCNL or URL, and the operation success rates were compared. The duration of operation, intraoperative blood loss (average hemoglobin decrease), complications, mean hospital stay and residual stone rates were also compared. Fifty-four patients underwent tubeless-PCNL; the average stone size was (1.9 ± 0.4) cm, nephrostomy tubes were placed in two patients, and the operation success rate was 96.3% (52/54). In the remaining 52 patients, the mean operation time was (30.1 ± 14.8) minutes with an average postoperative hemoglobin decrease of (10.2 ± 6.1) g/L, and the mean hospital stay was (3.0 ± 1.4) days. Only one of these patients had residual fragments (2%). The main complications included minor perirenal hematoma in 1 patient, fever in 2 patients, elevated blood WBC in 11 patients, and analgesic requirement in 3 patients. In the study, 128 patients were treated with URL; the average stone size was (1.7 ± 0.3) cm. Nineteen procedures failed: 10 patients were converted to PCNL, extracorporeal shock wave lithotripsy was performed after double-J stent placement in 5 patients, and migration of calculi or stone fragments occurred in 4 patients. The mean operative time was (51.3 ± 25.5) minutes for the remaining 109 patients, with a hemoglobin reduction of (5.2 ± 7.2) g/L. The mean hospital stay was (2.9 ± 1.3) days, and residual stones were found in 13 of the 109 patients (11.9%).
The main complications included fever in 3 patients, elevated blood WBC in 42 patients, and analgesic requirement in 13 patients because of pain in the urethra or flank. The stone size did not differ significantly between the two groups, but the success rate of the tubeless-PCNL procedure was significantly higher. Apart from the hemoglobin decrease, which was slightly greater in the tubeless-PCNL group, the mean operative time, the rate of residual stones and the rate of complications were significantly lower in the tubeless-PCNL group. Treating stones larger than 1.5 cm located above the fourth lumbar vertebra is challenging. Such stones are difficult to treat with URL because of a high probability of failure, whereas tubeless-PCNL is more likely to be performed successfully. For surgeons experienced with PCNL, treating stones ≥ 1.5 cm with the tubeless-PCNL procedure may prove more efficient, with a higher operation success rate and a lower risk of complications, without lengthening the postoperative hospital stay.
Effect of a pediatric trauma response team on emergency department treatment time and mortality of pediatric trauma victims.
Delay in the provision of definitive care for critically injured children may adversely affect outcome. We sought to speed care in the emergency department (ED) for trauma victims by organizing a formal trauma response system. A case-control study of severely injured children, comparing those who received treatment before and after the creation of a formal trauma response team. A tertiary pediatric referral hospital that is a locally designated pediatric trauma center, and also receives trauma victims from a geographically large area of the Western United States. Pediatric trauma victims identified as critically injured (designated as "trauma one") and treated by a hospital trauma response team during the first year of its existence. Control patients were matched with subjects by probability of survival scores, and were chosen from pediatric trauma victims treated at the same hospital during the year preceding the creation of the trauma team. A trauma response team was organized to respond to pediatric trauma victims seen in the ED. The decision to activate the trauma team (designation of patient as "trauma one") is made by the pediatric emergency medicine (PEM) physician before patient arrival in the ED, based on data received from prehospital care providers. Activation results in the notification and immediate travel to the ED of a pediatric surgeon, neurosurgeon, emergency physician, intensivist, pharmacist, radiology technician, phlebotomist, and intensive care unit nurse, and mobilization of an operating room team. Most trauma one patients arrived by helicopter directly from accident scenes. Data recorded included identifying information, diagnosis, time to head computerized tomography, time required for ED treatment, admission Revised Trauma Score, discharge Injury Severity Score, surgical procedures performed, and mortality outcome.
Trauma and Injury Severity Score (TRISS) methodology was used to calculate the probability of survival, and mortality was compared with the reference patients of the Major Trauma Outcome Study by calculation of a z score. Patients treated in the ED after trauma team initiation had statistically shorter times from arrival to computerized tomography scanning (27 +/- 2 vs 21 +/- 4 minutes), to the operating room (63 +/- 16 vs 623 +/- 27 minutes), and total time in the ED (85 +/- 8 vs 821 +/- 9 minutes). Calculation of z scores showed that survival for the control group was not different from the reference population (z = -0.8068), whereas survival for trauma one patients was significantly better than the reference population (z = 2.102). Before creation of the trauma team, relevant specialists were individually called to the ED for patient evaluation. When a formal trauma response team was organized, the time required for ED treatment of severe trauma decreased, and survival was better than predicted compared with the reference Major Trauma Outcome Study population.
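The z statistic used in TRISS methodology (Flora's method) compares the observed number of survivors with the sum of the individual TRISS probabilities of survival. The sketch below illustrates that calculation with hypothetical probability-of-survival values; it does not use or reproduce the study's data.

```python
from math import sqrt

def triss_z(survived, p_survival):
    """Flora's z statistic: (observed - expected survivors) /
    sqrt(sum of Ps*(1-Ps)). Positive z means survival better than
    the reference population; |z| > 1.96 is significant at p < 0.05."""
    observed = sum(survived)                       # actual survivors
    expected = sum(p_survival)                     # sum of TRISS Ps values
    variance = sum(p * (1 - p) for p in p_survival)
    return (observed - expected) / sqrt(variance)

# Hypothetical cohort: 1 = survived, 0 = died, with assumed Ps values
survived   = [1, 1, 1, 0, 1, 1]
p_survival = [0.95, 0.90, 0.80, 0.40, 0.85, 0.60]
z = triss_z(survived, p_survival)
```

Here the cohort has one more survivor than expected (5 observed vs. 4.5 expected), giving a small positive z, far short of the 1.96 threshold that the study's trauma one group exceeded (z = 2.102).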
The role of superoxide anions in the establishment of an interferon-alpha-mediated antiviral state.
It has been suggested that CuZn-superoxide dismutase (CuZnSOD) is required for the establishment of an interferon (IFN)-mediated antiviral state. To investigate this possibility further, a panel of 6 stably transfected HeLa clones, expressing CuZnSOD activity from 1.6 to 7.3 times the normal level, was treated with different concentrations of recombinant human interferon alpha A (rHuIFN-alpha A) followed by challenge with vesicular stomatitis virus (VSV). A biphasic response curve was generated (r = 0.87, p less than 0.025). Clones with up to 3-fold basal CuZnSOD activity exhibited an inverse relationship between their ability to generate an IFN-alpha-mediated antiviral state and CuZnSOD activity: the higher the CuZnSOD activity, the lower the sensitivity to IFN-alpha and the more IFN-alpha required for antiviral defense. Clones with between 4 and 7.3 times higher CuZnSOD activity than the non-transfected HeLa control showed a direct relationship between CuZnSOD activity and sensitivity to IFN-alpha. Furthermore, in agreement with the results obtained with the SOD1-transfected HeLa cells with up to 3 times the basal SOD activity, fetal fibroblasts derived from the SOD1-transgenic mouse strains TgHS-229 and TgHS-218, which also express 3 times the basal CuZnSOD activity, required more IFN-alpha to achieve 50% protection. These results suggest a possible role for the superoxide anion in the establishment of the IFN-mediated antiviral effect, especially in the dose-response region in which the inverse relationship between the generation of the IFN-alpha-mediated antiviral state and CuZnSOD activity was observed. To assess this possibility, allopurinol was used as a xanthine oxidase inhibitor and hydroxyl radical scavenger in the IFN-alpha-mediated antiviral assay. Addition of 3 mM allopurinol diminished the IFN-mediated antiviral effect by 40 to 50% (p less than 0.01), and there was a reduction in superoxide generation (p less than 0.05).
The degree of reduction caused by allopurinol treatment was higher at an IFN-alpha concentration of 10 U/ml than at 100 U/ml, and there was no correlation between CuZnSOD activity and the degree of reduction. To establish further the role of superoxide as an antiviral agent, paraquat was used as a superoxide generator in the absence of IFN-alpha in the antiviral assay. Although paraquat at high concentrations is toxic to cells, it showed a protective effect against VSV infection, and an inverse relationship (r = 0.79, p less than 0.025) between cell survival and CuZnSOD activity was observed with 150 mM paraquat treatment.
Leaching and transport of PFAS from aqueous film-forming foam (AFFF) in the unsaturated soil at a firefighting training facility under cold climatic conditions.
The contaminant situation at a Norwegian firefighting training facility (FTF) was investigated 15 years after the use of perfluorooctanesulfonic acid (PFOS)-based aqueous film-forming foam (AFFF) products had ceased. Detailed mapping of the soil and groundwater at the FTF field site in 2016 revealed high concentrations of per- and polyfluoroalkyl substances (PFAS). PFOS accounted for 96% of the total PFAS concentration in the soil, with concentrations ranging from <0.3 μg/kg to 6500 μg/kg. The average concentration of PFOS in the groundwater down-gradient of the site was 22 μg/l (6.5-44.4 μg/l), accounting for 71% of the total PFAS concentration. To better understand the historical fate of AFFF used at the site, unsaturated column studies were performed with pristine soil of similar texture and mineralogy to that found at the FTF and the same PFOS-containing AFFF used at the site. Transport and attenuation processes governing PFAS behavior were studied with a focus on cold climate conditions and infiltration during snowmelt, the main groundwater recharge process at the FTF. Low and high water infiltration rates of 4.9 and 9.7 mm/day were applied for 14 and 7 weeks, respectively, thereby applying the same total amount of water but changing the aqueous saturation of the soil columns. The low infiltration rate represented 2 years of snowmelt, while the high infiltration rate can be considered to mimic the extra water added in areas with intensive firefighting training. In the low infiltration experiment, PFOS was not detected in the column leachate over the full 14 weeks. With high infiltration, PFOS was detected after 14 days, and concentrations increased from 20 ng/l to 2200 ng/l by the end of the experiment (49 days). Soil extracted from the columns in 5 cm layers showed PFOS concentrations in the range <0.21-1700 μg/kg in the low infiltration column. A clear maximum was observed at a soil depth of 30 cm.
No PFOS was detected below 60 cm depth. In the high infiltration column, PFOS concentrations ranged from 7.4 to 1000 μg/kg, with the highest concentrations found at 22-32 cm depth. In this case, PFOS was detected down to the deepest sample (~90 cm). Based on the field study, retardation factors for the average vertical transport of PFOS in the unsaturated zone were estimated to be 33-42 and 16-21 for the areas with low and high AFFF impact, respectively. The estimated retardation factors for the column experiments were much lower, at 6.5 and 5.8 for low and high infiltration, respectively. This study showed that PFOS is strongly attenuated in the unsaturated zone and that its mobility depends on the infiltration rate. The results also suggest that the attenuation rate increases with time.
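Retardation factors like those estimated above are conventionally obtained either from the ratio of pore-water velocity to contaminant front velocity or from R = 1 + (ρb/θ)·Kd. The sketch below illustrates both conventions; the bulk density, water content, and Kd values are assumed for illustration and are not the paper's parameters.

```python
def retardation_from_kd(bulk_density, water_content, kd):
    """R = 1 + (rho_b / theta) * Kd.
    Units cancel: (kg/L) * (L/kg) is dimensionless."""
    return 1 + (bulk_density / water_content) * kd

def retardation_from_velocity(v_water, v_front):
    """R = pore-water velocity / velocity of the contaminant front;
    R = 1 means the contaminant moves with the water."""
    return v_water / v_front

# Assumed illustrative values: rho_b = 1.6 kg/L, theta = 0.25, Kd = 0.8 L/kg
R = retardation_from_kd(1.6, 0.25, 0.8)
print(f"R = {R:.2f}")
```

An R of about 6 from such parameters would mean PFOS moves roughly six times slower than the infiltrating water, the same order as the column-experiment values (6.5 and 5.8) reported above.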
Influence of high-dose intraoperative remifentanil with or without amantadine on postoperative pain intensity and morphine consumption in major abdominal surgery patients: a randomised trial.
Human volunteer studies demonstrate ketamine-reversible opioid-induced hyperalgesia, consistent with reports of increased postoperative pain and analgesic consumption. However, recent clinical trials of intraoperative administration of high-dose remifentanil have shown controversial results. To investigate in lower abdominal surgery patients whether postoperative pain intensity and analgesic consumption are increased following intraoperative high-dose vs. low-dose remifentanil, and whether this could be prevented by preoperative administration of the NMDA antagonist amantadine. Randomised, placebo-controlled, clinical study. University hospital. Sixty patients scheduled for elective major lower abdominal surgery. Patients were randomly assigned to one of three anaesthetic regimens. First, in the group 'low-dose remifentanil and preoperative isotonic saline' (n=15), a remifentanil infusion was maintained at a rate of 0.1 μg/kg/min throughout anaesthesia, and the end-tidal concentration of sevoflurane started at 0.5 minimum alveolar concentration (MAC) and was increased by 0.2% increments according to clinical demand. Preoperatively, 500 ml of NaCl 0.9% was infused as the study solution. Second, in the group 'high-dose remifentanil and preoperative saline' (n=17), the end-tidal concentration of sevoflurane was maintained at 0.5 MAC throughout anaesthesia. A remifentanil infusion was started at a rate of 0.2 μg/kg/min and subsequently increased by 0.05 μg/kg/min increments according to clinical demand. Preoperatively, these patients also received 500 ml of NaCl 0.9% as the study solution. Third, the group 'high-dose remifentanil and preoperative amantadine' (n=16) received the same anaesthetic protocol as the second group, but the preoperative study solution was replaced by amantadine (200 mg/500 ml). Pain intensity measured by the numerical rating scale and cumulative morphine consumption.
The remifentanil dose in both high-dose groups was significantly higher than in the low-dose remifentanil group (0.20±0.04 and 0.23±0.02 vs. 0.08±0.04 μg/kg/min; P<0.001). Pain intensity gradually increased up to 45 min postoperatively in all groups, and then decreased again towards low levels in parallel with a linear increase in morphine consumption. Postoperative pain intensity and morphine consumption did not differ significantly between groups. Moreover, preoperative amantadine revealed no additional benefit. We were not able to demonstrate any influence of high-dose remifentanil on routine clinical outcome parameters of pain. Although not without limitations, these findings are in line with other clinical trials that could not detect an opioid-induced impact on postoperative pain parameters, which might be less sensitive for detecting opioid-induced hyperalgesia than quantitative sensory testing. DRKS00004626.
[HPV-HR detection by home self-sampling in women not compliant with Pap test for cervical cancer screening. Results of a pilot programme in Bouches-du-Rhône].
Non-participation in cervical screening is the major determinant of the risk of mortality from cervical cancer. In France, around 40% of women do not participate in regular screening. The cultural and economic barriers to screening by Pap test are numerous; one of the most frequent is the refusal of gynaecological examination. A persistent HPV(HR) infection is a necessary factor for developing cervical cancer. HPV(HR) testing has a high sensitivity for detecting high-grade cervical intra-epithelial neoplasia (CIN 2-3) and a satisfactory specificity after 30-35 years of age. The principal objective of this study was to compare participation rates among women 35-69 years old who had not undergone a Pap test after a first individual invitation, when either an HPV(HR) self-test to be performed at home was offered or a second invitation to Pap test was sent. We also evaluated the quality of the two tests, the positive results obtained by age group, and the histological type of lesions subsequently diagnosed in women with positive results. The study included 9,334 women, 35-69 years old, who had not had a Pap test during the previous 2 years and who did not respond to a first individual invitation. These non-responders were randomized into two groups: one group (n=4,934) received a second individual invitation and the other (n=4,400) was offered an HPV self-test to receive and perform at home. In women 35-69 years old, participation in response to the second invitation to Pap test was significantly lower (7.2%) than participation with the self-test (26.4%), P<0.001. The quality of the two tests was satisfactory; the self-test was not altered by postage to the laboratory (non-interpretable rate=1.4% [95% CI: 0.65%; 2.15%]). Of the 311 Pap tests done, 5.5% (17) were classified "abnormal" (nine ASCUS, one high-grade and seven low-grade lesions).
Follow-up of 13 of the 17 women confirmed the diagnosis of 1 case of CIN2 and 2 cases of CIN3; 4 women were lost to follow-up after 6 months. Of the 939 HPV(HR) tests done, 6.2% (58) were positive. This positivity rate was not influenced by age. Of the 58 positive HPV(HR) cases, only 27 were of genotype 16 (46.5% [95% CI: 33.7%; 59.3%]). This low rate is a consequence of an inversion of the ratio of HPV 16 versus other types in women 60 years old and over. In this group, follow-up of 36 women led to diagnoses of five cases of CIN1, one of CIN2 and four of CIN3; 22 patients were lost to follow-up at 6 months. Globally, in the studied population, an individual recall for Pap test allowed the diagnosis and treatment of 3 high-grade lesions (0.7‰), and the dispatch of a self-test allowed the diagnosis and treatment of five high-grade lesions (1.4‰); this difference is significant (P=0.02; OR=0.25 [0.05; 0.97]). The HPV(HR) self-test seems to be better accepted than the Pap test among 35-69-year-old women who had not responded to an individual invitation, and the quality of the test is satisfactory. Such a test can be proposed to 35-69-year-old non-participants in Pap testing to increase coverage for cervical screening, provided the rate of diagnostic examinations performed after a positive HPV(HR) result is sufficiently high.
Observations of environmental changes and potential dietary impacts in two communities in Nunavut, Canada.
Inuit communities across the Arctic still practice subsistence living. Hunting, fishing and gathering are an important part of the culture, and the harvested 'country food' provides sources of nutrients invaluable to maintaining the health of the populations. However, Inuit are voicing concerns about how observed climate change is affecting their traditional way of life. The objective of this study was to report on observed climate changes and how they affect the country food harvest in two communities in the Canadian Arctic. The nutritional implications of these changes are discussed, as well as how the communities need to plan for adaptation. A total of 17 adult participants from Repulse Bay and Kugaaruk, Nunavut were invited to participate. Participants were selected using purposeful sampling methods, choosing the most knowledgeable community members for the study. Inuit Elders, hunters, processors of the animals, and other community members above the age of 18 years were selected for their knowledge of harvesting and the environment. Two-day bilingual focus groups using semi-directed, unstructured questions were held in each community to discuss perceived climate changes related to the access and availability of key species. Key topics included ice, snow, weather, marine mammals, land mammals, fish, species ranges, migration patterns, and the quality and quantity of animal populations. Maps were used to pinpoint harvesting locations. A qualitative categorizing strategy was used for analysis of the data. This strategy involves coding data to form themes and to allow cross-comparison between communities. Each major animal represented a category; other categories included land, sea, and weather. Results were verified by the participants and community leaders. Three themes emerged from the observations: (1) ice/snow/water; (2) weather; and (3) changes in species.
Climate change can affect the accessibility and availability of the key country food species, including caribou, marine mammals, fish, birds and plants. Various observations on the relationship between weather and the health and distribution of animal and plant populations were reported. While many of the observations were common to the two communities, many were community-specific or inconsistent. Participants from both communities found that climate change was affecting the country food harvest in both positive and negative ways. Key nutrients that could be affected are protein, iron, zinc, n-3 fatty acids, selenium and vitamins D and A. Community members from Repulse Bay and Kugaaruk have confirmed that climate change is affecting their traditional food system. Local and regional efforts are needed to plan for food security and health promotion in the region, and global actions are needed to slow the process of climate change.
The global impact of non-communicable diseases on healthcare spending and national income: a systematic review.
The impact of non-communicable diseases (NCDs) in populations extends beyond ill-health and mortality to large financial consequences. To systematically review and meta-analyze studies evaluating the impact of NCDs (including coronary heart disease, stroke, type 2 diabetes mellitus, cancer (lung, colon, cervical and breast), chronic obstructive pulmonary disease and chronic kidney disease) at the macro-economic level: healthcare spending and national income. Medical databases (Medline, Embase and Google Scholar) were searched up to November 6, 2014. For further identification of suitable studies, we searched reference lists of included studies and contacted experts in the field. We included randomized controlled trials, systematic reviews, cohort, case-control, cross-sectional, modeling and ecological studies carried out in adults assessing the economic consequences of NCDs on healthcare spending and national income, without language restrictions. Abstract and full-text selection was done by two independent reviewers. Any disagreements were resolved through consensus or consultation of a third reviewer. Data were extracted by two independent reviewers using a pre-designed data collection form. We included studies evaluating the impact of at least one of the selected NCDs on at least one of the following outcome measures: healthcare expenditure, national income, hospital spending, gross domestic product (GDP), gross national product, net national income, adjusted national income, total costs, direct costs, indirect costs, inpatient costs, outpatient costs, per capita healthcare spending, aggregate economic outcome, capital loss in production levels in a country, economic growth, GDP per capita (per capita income), percentage change in GDP, intensive growth, extensive growth, employment, direct governmental expenditure and non-governmental expenditure. From 4,364 references, 153 studies met our inclusion criteria. Most of the studies focused on healthcare-related costs of NCDs.
Thirty studies reported the economic impact of NCDs on healthcare budgets and 13 on national income. Healthcare expenditure for cardiovascular disease (12-16.5%) was the highest; for other NCDs it ranged between 0.7% and 7.4%. NCD-related health costs vary across countries and regions and according to the type of NCD. Additionally, costs increase with greater severity and more years lived with the disease. Low- and middle-income (LMI) countries were the focus of just 16 papers, which suggests an information shortage concerning the true economic burden of NCDs in these countries. NCDs pose a significant financial burden on healthcare budgets and nations' welfare, and this burden is likely to increase over time. However, further work is required to standardize the methods available for assessing the economic impact of NCDs and to involve (hitherto under-addressed) LMI populations across the globe.
Conformational characteristics of peptides and unanticipated results from crystal structure analyses.
Preferred conformation and types of molecular folding are among the topics that can be addressed by structure analysis using x-ray diffraction of single crystals. The conformations of small linear peptide molecules with 2-6 residues are affected by the polarity of the solvent, the presence of water molecules, hydrogen bonding with neighboring molecules, and other packing forces. Larger peptides, both cyclic and linear, have many intramolecular hydrogen bonds, the effect of which outweighs any intermolecular attractions. Numerous polymorphs of decapeptides grown from a variety of solvents, with different cocrystallized solvents, show a constant conformation for each peptide. Large conformational changes occur, however, upon complexation with metal ions. A new form of free valinomycin grown from DMSO exhibits near three-fold symmetry with only three intramolecular hydrogen bonds. The peptide is in the form of a shallow bowl with a hydrophobic exterior. Near the bottom of the interior of the bowl are three carbonyl oxygens, spaced and directed so that they are in position to form three ligands to a K+ ion; complexation can be completed by the three lobes containing the beta-bends closing over and encapsulating the K+ ion. In another example, free antamanide and its biologically inactive perhydro analogue, in which the four phenyl groups become cyclohexyl groups, have essentially the same folding of backbone and side chains. The conformation changes drastically upon complexation with Li+ or Na+. However, the metal ion complex of natural antamanide has a hydrophobic globular form, whereas the metal ion complex of the inactive perhydro analogue has a polar band around the middle. The structural results indicate that the antamanide molecule is in a complexed form during its biological activity. Single-crystal x-ray diffraction structure analyses have identified the manner in which water molecules are essential to creating minipolar areas on apolar helices.
Completely apolar peptides, such as membrane-active peptides, can acquire amphiphilic character by insertion of a water molecule into the helical backbone of, for example, Boc-Aib-Ala-Leu-Aib-Ala-Leu-Aib-Ala-Leu-Aib-OMe. The C-terminal half assumes an alpha-helix conformation, whereas the N-terminal half is distorted by the insertion of a water molecule W(1) between N(Ala5) and O(Ala2), forming hydrogen bonds N(5)H...W(1) and W(1)...O(2). The distortion of the helix exposes C=O(Aib1) and C=O(Aib4) to the outside environment, with the consequence of attracting additional water molecules. The leucyl side chains are on the other side of the molecule. Thus a helix with an apolar sequence can mimic an amphiphilic helix.
Mode and timing of body pattern formation (Regionalization) in the early embryonic development of cyclorrhaphic dipterans (Protophormia, Drosophila).
1. Eggs of the blowfly Protophormia spec. were separated into anterior and posterior fragments of varying sizes. The operations were carried out between oviposition and the blastoderm stage. The partial larvae produced by the fragments were scored for the cuticular pattern they had formed. 2. The cuticle of the 1st instar larva carries 11 denticle belts which correspond to the anterior borders of the thoracic and abdominal body segments. These are considered the elements of a linear longitudinal pattern which starts with the head region. 3. Egg fragments of the sizes studied did not produce the complete cuticular pattern. 4. If denticle belts were present on the partial larvae formed in egg fragments, these always included the corresponding terminal pattern element (no. 1 in anterior, no. 11 in posterior fragments). Bigger partial patterns from anterior fragments may have any belt up to no. 10 as their most posterior belt; posterior partial patterns may start anteriorly with any belt up to no. 1, i.e. behind the head region. 5. After fragmentation during early stages of development, all eggs fail to form some pattern elements. Fragmentation thus causes a gap in the pattern. The extent and position of this gap within the pattern depend on the level and stage of fragmentation. 6. With increasing egg age (developmental stage) at fragmentation, the gap in the cuticular pattern becomes progressively smaller. Eggs fragmented during or after formation of the blastodermal cell walls as a rule form all pattern elements. 7. The progressive reduction of the gap in the cuticular pattern is due to the formation of bigger sets of pattern elements in both partner fragments. That is, on average, an anterior or posterior fragment of given size will produce more pattern elements if separated from the rest of the egg at a later stage than if separated early. 8.
In order to produce a given set of pattern elements, a fragment needs on average to be bigger when separated early than when separated later. This applies to both anterior and posterior fragments at the fragmentation levels studied. 9. According to these results, the egg of Protophormia cannot be considered a mosaic of determinants for the different pattern elements at oviposition. The developmental fate of at least the more equatorial egg regions appears to become specified epigenetically during the period between oviposition and blastoderm formation. 10. Once the egg has become subdivided into blastoderm cells, it reacts as a developmental mosaic with respect to the pattern studied. 11. Preliminary results in Drosophila are compatible with these conclusions. 12. The results are compared with those obtained from other insect groups, and formal models for their interpretation are discussed. Pattern specification by interaction of terminal egg regions can be considered the common denominator for a number of egg types. 13. The results demonstrate that formally comparable processes of pattern formation occur in different insect egg types at different stages of development.
Ropinirole for levodopa-induced complications in Parkinson's disease.
Long-term levodopa therapy for Parkinson's disease is complicated by the development of motor fluctuations and abnormal involuntary movements. One approach is to add a dopamine agonist at this stage of the disease to reduce the time the patient spends immobile or 'off', and to reduce the dose of levodopa in the hope of reducing such problems in the future. To compare the efficacy and safety of adjuvant ropinirole therapy versus placebo in patients with Parkinson's disease already established on levodopa therapy and suffering from motor complications. Electronic searches of MEDLINE, EMBASE and the Cochrane Controlled Trials Register. Handsearching of the neurology literature as part of the Cochrane Movement Disorders Group's strategy. Examination of the reference lists of identified studies and other reviews. Contact with SmithKline Beecham. Randomised controlled trials of ropinirole versus placebo in patients with a clinical diagnosis of idiopathic Parkinson's disease and long-term complications of levodopa therapy. Data were abstracted independently by the authors and differences settled by discussion. The outcome measures used included Parkinson's disease rating scales, levodopa dosage, 'off' time measurements and the frequency of withdrawals and adverse events. Three double-blind, parallel-group, randomised, controlled trials have been conducted on 263 patients. The two phase II studies were relatively small, were conducted over the short term (12 weeks), and used relatively low doses of ropinirole (mean administered doses 3.3 and 3.5 mg/d) in a twice-daily regime. In view of this clinical heterogeneity and some statistical heterogeneity, the results of these trials have not been included in a meta-analysis. The conclusions of this review are based on evidence from a single phase III study which was medium term (26 weeks) and used ropinirole doses in line with the current UK licensed maximum in a thrice-daily regime.
In view of difficulties in assessing changes in off time in Leiberman 98, caused by the initial imbalance between the arms of the trial, it is unsafe to draw any firm conclusion about the effect of ropinirole on off time. However, as an adverse event, dyskinesia was significantly increased in those who received ropinirole (Leiberman 98; odds ratio 2.90; 95% CI 1.36 to 6.19; Table 8). Measurements of motor impairments and disability were poor in this study, with incomplete information available. Levodopa dose could be reduced in Leiberman 98, with a significantly larger reduction on ropinirole than on placebo (weighted mean difference 180 mg/d; 95% CI 106 to 253; Table 2). No significant differences in the frequency of adverse event reports were noted between ropinirole and placebo apart from the increase in dyskinesia with ropinirole. There was a trend towards fewer withdrawals on ropinirole in Leiberman 98, but this did not reach statistical significance. Ropinirole therapy can reduce levodopa dose but at the expense of increased dyskinetic adverse events. No clear effect on off time reduction was found, but this may have been due to the under-powering of the single evaluable trial. Inadequate data on motor impairments and disability were collected to assess these outcomes. These conclusions apply to short- and medium-term treatment, up to 26 weeks. Further longer-term trials are required, with measurements of effectiveness, and also studies to compare the newer with the older dopamine agonists.
The TANF/SSI connection.
Interactions and overlap of social assistance programs across clients interest policymakers because such interactions affect both the clients' well-being and the programs' efficiency. This article investigates the connections between Supplemental Security Income (SSI) and Temporary Assistance for Needy Families (TANF) and TANF's predecessor, the Aid to Families with Dependent Children (AFDC) program. Connections between receipt of TANF and SSI are widely discussed in both disability policy and poverty research literatures because many families receiving TANF report disabilities. For both states and the individuals involved, it is generally financially advantageous for adults and children with disabilities to transfer from TANF to SSI. States gain because the federal government pays for the SSI benefit, and states can then use the TANF savings for other purposes. The families gain because the SSI benefits they acquire are greater than the TANF benefits they lose. The payoff to states from transferring welfare recipients to SSI was substantially increased when Congress replaced AFDC with TANF in 1996. States retained less than half of any savings achieved through such transfers under AFDC, but they retain all of the savings under TANF. Also, the work participation requirements under TANF have obligated states to address the work support needs of adults with disabilities who remain in TANF, and states can avoid these costs if adults have disabilities that satisfy SSI eligibility requirements. The incentive for TANF recipients to apply for SSI has increased over time as inflation has caused real TANF benefits to fall relative to payments received by SSI recipients. Trends in the financial incentives for transfer to SSI have not been studied in detail, and reliable general data on the extent of the interaction between TANF and SSI are scarce. 
In addition, some estimates of the prevalence of TANF receipt among SSI awardees are flawed because they fail to include adults receiving benefits in TANF-related Separate State Programs (SSPs). SSPs are assistance programs that are administered by TANF agencies but are paid for wholly from state funds. When the programs are conducted in a manner consistent with federal regulations, the money states spend on SSPs counts toward federal maintenance-of-effort (MOE) requirements, under which states must sustain a certain level of contribution to the costs of TANF and approved related activities. SSPs are used for a variety of purposes, including support of families who are in the process of applying for SSI. Until very recently, families receiving cash benefits through SSPs were not subject to TANF's work participation requirements. This article contributes to analysis of the interaction between TANF and SSI by evaluating the financial consequences of TANF-to-SSI transfer and developing new estimates of both the prevalence of receipt of SSI benefits among families receiving cash assistance from TANF and the proportion of new SSI awards that go to adults and children residing in families receiving TANF or TANF-related benefits in SSPs. Using data from the Urban Institute's Welfare Rules Database, we find that by 2003 an SSI award for a child in a three-person family dependent on TANF increased family income by 103.5 percent on average across states; an award to the adult in such a family increased income by 115.4 percent. The gain from both child and adult transfers increased by about 6 percent between 1996 (the eve of the welfare reform that produced TANF) and 2003. Using data from the Department of Health and Human Services' TANF/SSP Recipient Family Characteristics Survey, we estimate that 16 percent of families receiving TANF/SSP support in federal fiscal year 2003 included an adult or child SSI recipient. 
This proportion has increased slightly since fiscal year 2000. The Social Security Administration's current procedures for tabulating characteristics of new SSI awardees do not recognize SSP receipt as TANF. We use differences in reported TANF-to-SSI flows between states with and without Separate State Programs to estimate the understatement of the prevalence of TANF-related SSI awards in states with SSPs. The results indicate that the absolute number of awards to AFDC (and, subsequently, TANF/SSP) recipients has declined by 42 percent for children and 25 percent for adults since the early 1990s. This result is a product of the decline in welfare caseloads. However, the monthly incidence of such awards has gone up: from less than 1 per 1,000 child recipients in calendar years 1991-1993 to 1.3 per 1,000 in 2001-2003 and, for adult recipients, from 1.6 per 1,000 in 1991-1993 to 4 per 1,000 in 2001-2003. From these results we conclude that a significant proportion of each year's SSI awards to disabled nonelderly people go to TANF/SSP recipients, and many families that receive TANF/SSP support include adults, children, or both who receive SSI. Given the Social Security Administration's efforts to improve eligibility assessment for applicants, to ensure timely access to SSI benefits for those who qualify, and to improve prospects for eventual employment of the disabled, there is a clear basis for working with TANF authorities both nationally and locally on service coordination and on smoothing the process of SSI eligibility assessment. The Deficit Reduction Act of 2005 reauthorized TANF through fiscal year 2010, but with some rules changes that are important in light of the analysis presented in this article. The new law substantially increases effective federal requirements for work participation by adult TANF recipients and mandates that adults in Separate State Programs be included in participation requirements beginning in fiscal year 2007. 
Thus SSPs will no longer provide a means for exempting from work requirements families that are in the process of applying for SSI, and the increased emphasis on work participation could result in more SSI applications from adult TANF recipients. |
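The per-1,000 monthly incidence figures reported above are a simple rate of awards over caseload. A minimal sketch, using hypothetical counts (the article's underlying caseload figures are not reproduced in this abstract):

```python
def monthly_incidence_per_1000(awards, recipients):
    """SSI awards per 1,000 TANF/SSP recipients in a month."""
    return 1000 * awards / recipients

# Hypothetical illustrative figures: 4,000 adult awards in a month
# among 1,000,000 adult recipients
print(monthly_incidence_per_1000(4000, 1_000_000))  # 4.0 per 1,000
```

Because the denominator (the welfare caseload) fell sharply over the period, the incidence rate can rise even while the absolute number of awards declines, which is the pattern the article reports.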
Alterations in the Neurobehavioral Phenotype and ZnT3/CB-D28k Expression in the Cerebral Cortex Following Lithium-Pilocarpine-Induced Status Epilepticus: the Ameliorative Effect of Leptin.
Zinc transporter 3 (ZnT3)-dependent "zincergic" vesicular zinc accounts for approximately 20% of the total zinc content of the mammalian telencephalon. Elevated hippocampal ZnT3 expression is acknowledged to be associated with mossy fiber sprouting and cognitive deficits. However, no studies have compared the long-term neurobehavioral phenotype with the expression of ZnT3 in the cerebral cortex following status epilepticus (SE). The aim of this study was to investigate changes in the long-term neurobehavioral phenotype as well as the expression of ZnT3 and calcium homeostasis-related CB-D28k in the cerebral cortex of rats subjected to neonatal SE and to determine the effects of leptin treatment immediately after neonatal SE. Fifty Sprague-Dawley rats (postnatal day 6, P6) were randomly assigned to two groups: the pilocarpine hydrochloride-induced status epilepticus group (RS, n = 30) and control group (n = 20). Rats were further divided into the control group without leptin (Control), control-plus-leptin treatment group (Leptin), RS group without leptin treatment (RS), and RS-plus-leptin treatment group (RS + Leptin). On P6, all rats in the RS group and RS + Leptin group were injected intraperitoneally (i.p.) with lithium chloride (5 mEq/kg). Pilocarpine (320 mg/kg, i.p.) was administered 30 min after the scopolamine methyl chloride (1 mg/kg) injection on P7. From P8 to P14, animals of the Leptin group and RS + Leptin group were given leptin (4 mg/kg/day, i.p.). The neurological behavioral parameters (negative geotaxis reaction reflex, righting reflex, cliff avoidance reflex, forelimb suspension reflex, and open field test) were observed from P23 to P30. The protein levels of ZnT3 and CB-D28k in the cerebral cortex were detected subsequently by the western blot method. Pilocarpine-treated neonatal rats showed long-term abnormal neurobehavioral parameters. 
In parallel, the protein level of CB-D28k was significantly downregulated and that of ZnT3 significantly upregulated in the cerebral cortex of the RS group. Leptin treatment soon after status epilepticus for 7 consecutive days counteracted these abnormal changes. Taken together with the results from our previous reports on another neonatal seizure model, which showed a significant positive inter-relationship between ZnT3 and calcium/calmodulin-dependent protein kinase IIα (CaMKIIα), the data here suggest that ZnT3/CB-D28k-associated Zn(2+)/Ca(2+) signaling might be involved in neonatal SE-induced long-term brain damage, in the aspect of neurobehavioral impairment. Moreover, consecutive leptin treatment is effective at counteracting these hyperexcitability-related changes, suggesting a potential clinical significance. |
Video recording as a means of evaluating neonatal resuscitation performance.
To determine compliance with Neonatal Resuscitation Program (NRP) guidelines in our institution, by the use of videotaped newborn resuscitations. NRP is the standard of care for newborn resuscitation. The application of NRP guidelines and resuscitation skills in actual clinical settings is undocumented. A video recorder, mounted to the radiant warmer in the main obstetrical operating room, was used to record all high-risk resuscitations. All members of the resuscitation team were NRP-certified. The videotapes were reviewed within 14 days of the resuscitation and then erased. This ongoing review was approved as a quality assurance (QA) project ensuring confidentiality under California law. The first 100 resuscitations were evaluated to assess NRP compliance. Each step in the resuscitation (positioning, oxygen delivery, ventilation, chest compressions, intubation, and medication) was graded. A score was devised, with 2 points being awarded for every correct decision and proper procedure, 1 point for delayed interventions or inadequate technique, and zero points for indicated procedures that were omitted or for interventions that were not indicated. The total points were divided by the total possible points for that patient. The scores for the first 25 resuscitations (group 1) and the last 25 resuscitations (group 2) were compared. Fifty-four percent of the 100 resuscitations had deviations from the NRP guidelines. Ten percent received overly aggressive stimulation and 22% had poor suction technique. Of the 78 infants given oxygen, this decision was considered incorrect in 15%, and the delivery technique was poor in 10%. Of those requiring mask ventilation (n = 18), 24% had poor chest expansion, 11% used an incorrect rate, and 17% had inadequate reevaluation. Twelve infants were intubated; only 7 were successfully intubated on the first attempt and only 4 were intubated in <20 seconds. The longest intubation attempt was 50 seconds. 
Naloxone was given to 2 patients. One was breathing spontaneously with a heart rate >100. Resuscitations receiving a perfect evaluation score were more likely to occur in infants needing less intervention. The level of resuscitation required for groups 1 and 2 was statistically similar. There was no difference in resuscitation scores between the 2 groups. Only the inappropriate use of deep suctioning improved, with 8 of 25 events in group 1 and 0 of 25 in group 2. We have found a significant number of deviations from the NRP guidelines. Video recording of actual clinical practice is a useful QA tool for monitoring the conduct of newborn resuscitation. We are now conducting repeat video assessments of individual NRP providers to determine whether there is improved performance. |
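The scoring rule described above (2 points per correct decision or procedure, 1 point for delayed or inadequate technique, 0 for omitted or unindicated interventions, normalized by the maximum possible) can be sketched as follows; the example grades are hypothetical, since the study does not publish per-step data:

```python
def resuscitation_score(step_grades):
    """Compliance score: each graded step earns 2 (correct),
    1 (delayed/inadequate technique), or 0 (omitted when indicated,
    or performed when not indicated). The score is earned points
    divided by the maximum possible (2 per graded step)."""
    earned = sum(step_grades)
    possible = 2 * len(step_grades)
    return earned / possible

# Hypothetical grading of one resuscitation: positioning correct (2),
# oxygen delivery delayed (1), ventilation correct (2), suction omitted (0)
print(resuscitation_score([2, 1, 2, 0]))  # 0.625
```

Normalizing by the steps actually indicated for each patient lets scores be compared across resuscitations of different complexity, which is why infants needing less intervention were more likely to achieve a perfect score.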