Yates Continuity Correction

A χ² test with a Yates correction was employed to test for nonrandom, statistically significant changes in the LOE during the study period.

From: Journal of Vascular Surgery, 2018

DESIGN, MEASUREMENT, AND ANALYSIS OF CLINICAL INVESTIGATIONS

Edward H. Giannini, in Textbook of Pediatric Rheumatology (Fifth Edition), 2005

Continuity Correction of Yates

If the total N for a 2 × 2 chi-square table is less than about 40, the Yates continuity correction is used to compensate for deviations from the theoretical (smooth) probability distribution. The resulting chi-square value is smaller and the statistical inference is more conservative. The technique involves subtracting ½ from the absolute value of each deviation Oij − Eij. Mathematically, this is stated as follows:

χ² = Σ [ ( |Oij − Eij| − ½ )² / Eij ]

or

χ² = N ( |ad − bc| − N/2 )² / [ (a + b)(c + d)(a + c)(b + d) ]
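The cell-by-cell definition and the 2 × 2 shortcut (which carries a factor N in the numerator) are algebraically equivalent. A minimal sketch, with hypothetical counts and no external libraries, computes both:

```python
def yates_chi2_cells(table):
    """Yates-corrected chi-square from observed and expected cells of a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row, col = [a + b, c + d], [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n                 # expected frequency of this cell
            chi2 += (abs(obs - exp) - 0.5) ** 2 / exp  # corrected contribution
    return chi2

def yates_chi2_shortcut(table):
    """Equivalent shortcut form for a 2x2 table; note the factor n in the numerator."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (abs(a * d - b * c) - n / 2) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )

table = [[10, 9], [4, 17]]   # hypothetical small-sample 2x2 counts
print(yates_chi2_cells(table))     # both forms give the same value
print(yates_chi2_shortcut(table))
```

Either form can be referred to the χ² table with 1 degree of freedom.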

URL: https://www.sciencedirect.com/science/article/pii/B9781416002468500127

Statistical Testing, Risks, and Odds in Medical Decisions

ROBERT H. RIFFENBURGH, in Statistics in Medicine (Second Edition), 2006

YATES' CORRECTION

The reader should note that sometimes the chi-square statistic is calculated as the sum of [(|observed − expected| − 0.5)²/expected], where the 0.5 term, called Yates' correction, is subtracted to adjust for the counts being restricted to integers. It was used previously to provide a more conservative result for contingency tables with small cell counts. Currently, Fisher's exact test provides a better solution for dealing with small cell counts and is preferred. For larger cell counts, Yates' correction alters the result negligibly and may be ignored. Thus, the chi-square form of Eq. (6.2) is used in this book.

URL: https://www.sciencedirect.com/science/article/pii/B9780120887705500459

Categorical and Cross-Classified Data: Goodness of Fit and Association

Julien I.E. Hoffman, in Basic Biostatistics for Medical and Biomedical Practitioners (Second Edition), 2019

Practical Matters

1.

Because the expected values are usually not whole numbers, calculate them and the chi-square to three decimal places to minimize rounding errors.

2.

For 2 × 2 tables, many statisticians use Yates' correction for continuity; that is, decrease the absolute size of each deviation by 0.5. Table 14.5a should therefore be:

                   Alive            Dead             Total
Treatment A        O = 60, E = 65   O = 40, E = 35   100
  |O − E| − ½      4.5              4.5
  (|O − E| − ½)²   20.25            20.25
  χ²               0.312            0.579
Treatment B        O = 70, E = 65   O = 30, E = 35   100
  |O − E| − ½      4.5              4.5
  (|O − E| − ½)²   20.25            20.25
  χ²               0.312            0.579
Total              130              70               200

Observed numbers are shown as O and expected numbers as E; the last line of each block gives the χ² contribution of each cell.

χ²T = 1.78, 1 d.f., P = 0.18.

This correction has made the total chi-square smaller, so that the null hypothesis is even less likely to be rejected.
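As a check on the arithmetic for Table 14.5a, a short sketch using only the standard library reproduces χ²T = 1.78 and P = 0.18 (for 1 d.f. the chi-square tail probability reduces to erfc(√(χ²/2))):

```python
import math

observed = [[60, 40], [70, 30]]          # Table 14.5a: alive/dead for treatments A and B
row = [sum(r) for r in observed]
col = [sum(c) for c in zip(*observed)]
n = sum(row)

# Yates-corrected chi-square, summed over the four cells
chi2 = sum(
    (abs(observed[i][j] - row[i] * col[j] / n) - 0.5) ** 2 / (row[i] * col[j] / n)
    for i in range(2) for j in range(2)
)
p = math.erfc(math.sqrt(chi2 / 2))       # chi-square tail probability, 1 d.f.
print(round(chi2, 2), round(p, 2))       # 1.78 0.18
```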

3.

None of the expected frequencies should be too small. As a rule of thumb, Cochran suggested that 80% of the cells should have expected frequencies > 5 and that none should be below 1. A very small expected value could lead to a big squared deviation that, divided by the small expected value, gives a very large contribution to χ²T. This tends to inflate χ²T, and we should hesitate to accept a conclusion based on a single large value of χ². In addition, a very small expected value makes the theoretical basis for using the χ² table suspect. It would be better to use Fisher's exact test (see later). If the expected value is < 5, we can use the chi-square technique, but should be cautious about interpreting the results. If > 20% of the cells (in larger contingency tables) have expected values < 5, either combine adjacent rows or columns to increase the size of the expected numbers (if that makes sense) or do not use the chi-square test. These criteria are frequently used, but not all statisticians agree with them.

4.

Problems about whether to use Yates' correction or about too small an expected value can be dealt with by using Fisher's exact test. Computer programs can calculate the probability by this test for any sample size, so this test may be preferred for any 2 × 2 table.

URL: https://www.sciencedirect.com/science/article/pii/B9780128170847000140

Categorical and Cross-Classified Data

Julien I.E. Hoffman, in Biostatistics for Medical and Biomedical Practitioners, 2015

Goodness of Fit

Often we wish to compare a distribution of counts in different categories with some theoretical distribution. For example, in a classical genetics experiment (Mendel, 1965) tall plants are crossed with short plants to provide an F1 generation, and these are crossed with others of the F1 generation to provide a second (F2) generation. Counts of 120 plants of the F2 generation are listed in Table 14.2a.

Table 14.2a. Hypothetical Mendelian experiment

Tall Short Total
Observed counts 94 26 120

In classical Mendelian genetics with the phenotype determined by dominant and recessive alleles, there should be a 3:1 ratio in which only one-quarter of the F2 generation have two recessive genes and are short, whereas three-quarters of them have at least one dominant gene and are tall. Is the result of F2 crosses in our series of 120 plants consistent with the hypothesis that we are dealing with classical Mendelian inheritance? The observed Tall:Short ratio is 3.62:1, which differs from 3:1, but if the 3:1 ratio is true in the population, the relatively small sample might by chance have a ratio as discrepant as 3.62:1.

To analyze this, adopt the null hypothesis that the sample is drawn from a population in which the Tall:Short ratio is 3:1. If this is true, then a sample of 120 F2 crosses is expected to provide 0.75 × 120 = 90 tall plants and 0.25 × 120 = 30 short plants (Table 14.2b).

Table 14.2b. Chi-square analysis of Mendelian experiment

                   Tall    Short   Total
Observed (O)       94      26      120
Expected (E)       90      30      120
Deviation (O − E)  +4      −4      0
(O − E)²           16      16
χ²                 0.178   0.533   χ²T = 0.711

The expected frequencies (symbolized by fe or E) appear below their respective observed frequencies (symbolized by fo or O), and the deviations (O − E) appear in the line below the expected frequencies. Whether a given deviation is small or big depends not on its absolute size but on how big the deviation is relative to the numbers used in the experiment. Thus a deviation of 10 is large and perhaps important with 30 counts but small and probably unimportant with 1000 counts. To evaluate the relative size of the deviation, it is squared and then divided by the expected value in that column; the ratio (O − E)²/E is termed χ², also sometimes written as chi-square. The values of χ² for each column are added up to give a total χ²T = 0.711. This is a measure of the overall discrepancies between O and E for each cell, and the larger the discrepancy the larger will be the value of χ²T.

Does this value of 0.711 help to support or refute the null hypothesis? One way of deciding would be to draw a large number of samples of 120 plants from a population with a 3:1 ratio of Tall:Short plants, calculate χ² for each sample, and determine how often any given value of χ²T occurred. Large values of χ²T would occur infrequently, and we could estimate the probability of getting a value as big as or bigger than any given χ²T. If the probability of getting χ²T = 0.711 is low, we would tend to reject the null hypothesis, but if the probability is high, then we would not be able to reject the null hypothesis. Fortunately, we do not have to do these experiments because of the similarity of the distribution of chi-square to the χ² distribution described in Chapter 8. (There is some possibility for confusion in the use of symbols here. Most but not all texts distinguish between these two distributions.) A brief explanation of the equivalence is given by Altman (1992, p. 246).

For the example above, χ²T = 0.711 is referred to a table of the χ² distribution, and with 1 df, P = 0.399, so that if the null hypothesis is true, then about 40% of the time samples of 120 plants drawn from this population could have ratios as deviant from 3:1 as 3.62:1 or even more. Such an estimate would not allow us to be comfortable in rejecting the null hypothesis; we conclude that there is an acceptable fit between the observed and expected results.
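The goodness-of-fit calculation for Table 14.2b can be sketched in a few lines of standard-library code; for 1 df the χ² tail probability reduces to erfc(√(χ²/2)):

```python
import math

observed = {"tall": 94, "short": 26}
ratio = {"tall": 0.75, "short": 0.25}   # 3:1 Mendelian expectation
n = sum(observed.values())              # 120 plants

# Sum of (O - E)^2 / E over the categories
chi2 = sum((observed[k] - ratio[k] * n) ** 2 / (ratio[k] * n) for k in observed)

# Tail probability of the chi-square distribution with 1 df (closed form)
p = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 3), round(p, 3))      # 0.711 0.399
```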

Continuity Correction

Because the χ² distribution is continuous, if we examine only two groups then, in a large series of experiments in which the null hypothesis is known to be true, the values obtained cause us to reject the null hypothesis more than the expected number of times for any critical value of χ²T (type I error). To reduce the error, Yates' correction for continuity is often advised, especially if the actual numbers are small (Yates, 1934). (Yates (1902–1994) was Fisher's assistant at Rothamsted and became head of the unit when Fisher moved to University College London.) To make this correction, the absolute value of the deviation (written as |O − E|) is made smaller by 0.5: +4 becomes +3.5, and −4 becomes −3.5. The result is to make χ²T smaller than it would have been without the correction (χ²T becomes 0.544 in Table 14.2b), and the excessive number of type I errors is abolished. Yates' correction for continuity is also made with 2 × 2 tables, but should not be used for larger tables. The correction is used only when there is one degree of freedom (see below).

The need for such a correction is disputed (Adler, 1951; Conover, 1974; Maxwell, 1976; Rhoades and Overall, 1982; Upton, 1982). Yates (1984) analyzed the defects in alternative approaches. Some of the issues are discussed clearly by Ludbrook (Ludbrook and Dudley, 1994) who compared these various corrections. For the same data set he obtained two-sided values for P ranging from 0.0281 to 0.0673. Yates' correction certainly increases the risk of accepting the null hypothesis falsely (type II error). If, however, the decision about statistical significance or not depends on whether or not Yates' or some other correction is used, it is better to consider the results of the test as borderline or, better still, to use another test such as Fisher's exact test (Camilli, 1990; Yates, 1984) (see below).

The chi-square test is not restricted to two categories. Continuing with the genetic example, with two pairs of dominant–recessive alleles—one for Tall versus Short, one for Green versus Yellow—the expected ratios for the F2 plants are 9 tall green, 3 tall yellow, 3 short green, and 1 short yellow, or 9:3:3:1. Assume that an experiment gives the results shown in Table 14.2c.

Table 14.2c. Extended Mendelian experiment

                   Tall green  Tall yellow  Short green  Short yellow  Total
Observed (O)       94          22           33           11            160
Expected (E)       90          30           30           10            160
Deviation (O − E)  4           −8           3            1
(O − E)²           16          64           9            1
χ²                 0.178       2.133        0.300        0.100         χ²T = 2.711

The calculations show a value of χ²T of 2.711 with 3 df, and from the χ² table P = 0.438, so that we would not on this basis reject the null hypothesis.
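The same check can be sketched for Table 14.2c; for 3 df the chi-square tail probability has a closed form, erfc(√(x/2)) + √(2x/π)·e^(−x/2), which avoids needing a printed table:

```python
import math

def chi2_sf_3df(x):
    """Chi-square tail probability (survival function) for 3 degrees of freedom."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

observed = [94, 22, 33, 11]                       # tall green .. short yellow
expected = [e / 16 * 160 for e in (9, 3, 3, 1)]   # 9:3:3:1 ratio on N = 160
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 3), round(chi2_sf_3df(chi2), 3))  # 2.711 0.438
```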

In general, we are interested in large values of chi-square, because it is these that answer the question about whether or not to reject the null hypothesis, and so pay attention to the area on the right-hand side of the chi-square curve where values above certain critical values are found. On the other hand, even if the null hypothesis is true, there should be a certain degree of variation between the observed and expected data, and if the two sets of data are too much alike, we might be suspicious about why that occurred. Fisher once reviewed a series of experiments reported by Mendel, and after combining their individual chi-squares obtained a total chi-square of 42 with 84 df (Fisher, 1936). The area under the chi-square curve on the left-hand side gives the probability of getting a value this small or smaller, and it is about 0.00004. Therefore either a very unusual event had occurred, or else someone had manipulated the data to make them agree so closely with theory. There is evidence that this was done by a gardener who knew what answers the abbot Mendel wanted to get.

URL: https://www.sciencedirect.com/science/article/pii/B9780128023877000147

Statistics and evidence-based healthcare

Louise Brown, in Basic Science in Obstetrics and Gynaecology (Fourth Edition), 2010

Parametric and non-parametric tests

Parametric statistical tests are ones where assumptions are made about which mathematical distribution best represents the sample and the population from which it was taken. Non-parametric statistical tests are ones where no assumption has been made about the distribution of the data. In general, parametric tests tend to be more powerful and sensitive than non-parametric tests and therefore tend to be preferred, as fewer observations are required to provide evidence in favour of the hypothesis if it is true. A typical example of a parametric test is the use of a Student's t-test to compare the mean values of a continuous variable between two groups. One of the test assumptions is that the continuous data measured in the sample can be assumed to follow the normal distribution. If this assumption is not valid, then the non-parametric Mann–Whitney U test can be used, which ranks the observations in order of size and compares the proportions that fall above and below the median value for each of the groups in question. Thus, the Mann–Whitney U test is less sensitive to large outlying values but also less informative, as observations above or below the median are all treated in the same way.

Deciding whether to use parametric or non-parametric tests

For binary data, the assumption of a binomial distribution will be valid for small samples of <20, but both the binomial and normal distributions can be assumed for samples of binary data with >20 observations. When comparing proportions across binary, categorical or ordinal data, the chi-squared distribution is often assumed; however, if the numbers in the categories become very small then it is often more appropriate to use Yates' correction or Fisher's exact test, both of which are described in any standard statistical textbook.

Probably the most common example of deciding whether to use a parametric or non-parametric test is when you want to know whether the continuous data in your sample can be assumed to follow a normal distribution. In general, for small samples of less than about 15 observations, it is not safe to assume the data are normally distributed and non-parametric methods should generally be employed. However, it should be remembered that these tests are less powerful and the sample size is small, which will make the statistical results hard to interpret.

If you have a reasonably large sample size, the first thing to do is to plot your data points on a scattergraph or group the data into bins and plot them on a histogram. Inspection of the graphs or histograms is the simplest way of assessing whether your distribution assumptions are valid. Deviations from the normal distribution can lead to significant skewness or kurtosis. Figure 14.3 demonstrates histograms for data that follow a normal distribution or have a positively or negatively skewed distribution, and Figure 14.4 shows how data can deviate from the classic 'bell-shaped' curve seen in the normal distribution and exhibit kurtosis. Kurtosis is concerned with the shape of the distribution and can have a considerable impact on the statistical analysis that you choose to perform on your data. When kurtosis is extreme, non-parametric tests should be used.

It is worth noting that for data that are perfectly normally distributed the mean, median and mode values are all the same, whereas for positively skewed data the mean tends to be larger than the median and vice versa for negatively skewed data. When summarizing skewed data, it is often better to quote the median and interquartile range rather than the mean and standard deviation which are generally used for summarizing normally distributed data. The way to calculate these summary statistics is described in the next section.
In some cases, it helps to convert skewed data into another variable that can be assumed to follow the normal distribution (this is called transformation). For example, data that are positively skewed can often be manipulated into a more normally distributed format by transforming them onto the log scale; the t-test can then be used on the log-transformed data. There are more formal ways of testing your assumptions about the normal distribution, such as normal plots and Shapiro–Francia or Shapiro–Wilk tests, but these should be used cautiously and, if there is doubt, you should revert to using non-parametric methods.
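As a toy illustration with hypothetical numbers: in a positively skewed sample the mean is pulled above the median by the long right tail, and a log transform compresses that tail:

```python
import math
import statistics

# A small, hypothetical positively skewed sample (one large outlying value)
data = [1, 2, 2, 3, 4, 5, 40]

print(statistics.mean(data), statistics.median(data))   # mean pulled above the median

# Log-transforming compresses the long right tail
logged = [math.log(x) for x in data]
print(statistics.mean(logged), statistics.median(logged))
```

The transformed values are far less dominated by the outlier, which is why a t-test on log-transformed data can be reasonable when the raw data are positively skewed.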

URL: https://www.sciencedirect.com/science/article/pii/B978044310281300018X

Trial Design, Measurement, and Analysis of Clinical Investigations

Timothy Beukelman, Hermine I. Brunner, in Textbook of Pediatric Rheumatology (Seventh Edition), 2016

Two-Sample Tests

The two-sample test to be used is determined by the level of the data and by certain other assumptions, as defined later.

Chi-Square Test with One Degree of Freedom

For categorical (nominal) data and ordinal data with very few ranks, the most frequently used hypothesis test is the Pearson chi-square (χ²) test. This nonparametric test of statistical inference assesses the association between two variables. It is most commonly performed on contingency tables such as a 2 × 2 cross-tabulation, which has one degree of freedom (1 df). The significance of the resulting chi-square statistic is determined from a table of critical values.

Most tables of critical values report two-tailed probabilities; the P value is divided by 2 to find the one-tailed probability. Chi-square analysis with greater than 1 df (i.e., tables larger than 2 × 2) requires larger values to be significant. The Yates continuity correction is used to compensate for deviations from the theoretical (smooth) probability distribution when the total N assessed in a 2 × 2 contingency table is less than 40.

Fisher Exact Test

The Fisher exact test is used as a replacement for the chi-square test when the expected frequency of one or more cells is less than 5. This test is commonly used in studies in which one or more events are rare.
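Fisher's exact test can be sketched directly from the hypergeometric distribution. This uses one common definition of the two-sided P value (summing all tables with the observed margins that are no more probable than the observed table); the function name and example counts below are illustrative:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p for the 2x2 table [[a, b], [c, d]]: sum the
    hypergeometric probabilities of every table with the same margins whose
    probability does not exceed that of the observed table."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2

    def p_table(x):                       # probability that the top-left cell equals x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)  # feasible range for the top-left cell
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# A tiny table with zero cells, where the chi-square test would be unreliable
print(fisher_exact_two_sided(3, 0, 0, 3))   # 0.1
```

Because every table with the given margins is enumerated exactly, the result is valid however small the counts are, which is why the test is preferred when expected frequencies fall below 5.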

McNemar Test

The chi-square test assumes independence of the cells, as noted earlier. Experimental designs exist for observing categorical outcomes more than once in the same patient. The McNemar test (also known as the paired or matched chi-square) provides a way of testing the hypotheses in such designs. An example of the use of this statistic would be to test two different concentrations of an analgesic lotion given to 51 patients in sequence. The null hypothesis is that the proportion of patients who experience relief when they apply analgesic lotion 1 is the same as the proportion who experience relief when they apply lotion 2. Alternatively, the McNemar test would be used when comparing the effects of the two analgesic lotions in two groups of patients that are matched for independent variables that may influence the dependent variable (i.e., the proportion of patients with pain relief).
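A minimal sketch of the McNemar statistic for the lotion example, in the commonly used form with a continuity correction; the discordant-pair counts (13 patients relieved only by lotion 1, 5 only by lotion 2) are hypothetical:

```python
import math

def mcnemar_chi2(b, c):
    """McNemar chi-square with continuity correction; b and c are the counts of
    discordant pairs (relief with one lotion but not the other). Concordant
    pairs carry no information about the difference and are ignored."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant pairs among 51 patients who tried both lotions
chi2 = mcnemar_chi2(13, 5)
p = math.erfc(math.sqrt(chi2 / 2))       # chi-square tail probability, 1 df
print(round(chi2, 3), round(p, 3))       # 2.722 0.099
```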

Mantel-Haenszel Chi-Square Test

The Mantel-Haenszel chi-square test is known as a stratified chi-square test and is frequently used to detect confounding variables. The procedure involves breaking the contingency table into various strata and then calculating an overall relative risk, with the results from each stratum being weighted by the sample size of the stratum.

Common Errors with Chi-Square Tests

Perhaps because of its frequent use, the chi-square test is often employed or interpreted inappropriately. Common mistakes include unnecessary conversion of continuous or ordinal-level data to categorical data in order to use the chi-square test; nonindependence of the cells in the table (an exception is when the McNemar chi-square test is used); use of the chi-square rather than the Fisher exact test when expected cell frequencies are lower than 5; and confusion of statistical significance by chi-square values with clinical or biological importance.

Student t Test

What the chi-square test is to categorical data, the t test is to continuous data. This test is used for comparing two sample means from either independent or matched samples. The matched t test is more efficient (i.e., more powerful) than the Student t test for independent groups.

Nonparametric Tests

The t tests described earlier are parametric tests. That is, they make assumptions about the underlying distributions, including normality and equality of variances between groups. The t test is a very robust test; it is still valid even if its assumptions are substantially violated. If the violations are severe, the investigator may either transform the data using natural logarithms (described earlier) or use nonparametric tests. Nonparametric tests ignore the magnitude of differences between values taken on by the variables and work with ranks; no assumptions are made about the distribution of the data. For two-group comparisons, the Mann-Whitney U test (also known as the Wilcoxon rank sum test) is used for independent data and the Wilcoxon signed rank test for paired data.

URL: https://www.sciencedirect.com/science/article/pii/B9780323241458000065

Prognostic role of lymphatic vessel invasion in early gastric cancer: A retrospective study of 188 cases

Caigang Liu, ... Junqing Chen, in Surgical Oncology, 2010

Statistical methods

All the data were analyzed with SPSS 13.0 statistical software (Chicago, IL, USA). The chi-square test (with Yates correction when necessary) and independent t-tests were used, where appropriate, to compare the clinicopathological factors between patients with and those without lymphatic vessel invasion. Multivariate analysis was performed using the Cox proportional hazards model with forward stepwise selection. Hazard ratios and 95% confidence intervals (95% CI) were calculated. The Kaplan–Meier method and log-rank test were adopted in the analysis of survival rate comparison. A P value of less than 0.05 was considered statistically significant.

URL: https://www.sciencedirect.com/science/article/pii/S0960740408001047

Perforated diverticulitis: To anastomose or not to anastomose? A systematic review and meta-analysis

F. Shaban, ... S. Holtham, in International Journal of Surgery, 2018

3.4.1 Meta-analysis of mortality

Thirteen of these studies are included in the meta-analysis. In the study by Nagorney et al. [36] there was no mortality in the anastomosis group (4 patients). Yates's correction of adding 0.5 to any value of zero in the meta-analysis meant that the anastomosis group had an apparent 12.5% mortality compared with 8.7% in the Hartmann's group. The study was therefore excluded from the forest plot.

The funnel plot (Fig. 3) and Q-Q plot (Fig. 4) show that overall there was low heterogeneity (I2 = 0.10%, Q = 13.7564, df = 12, p = 0.3165). Trenti et al.'s study [27] is an outlier, most likely because the results very much favour an anastomosis; it is the only study in the forest plot that does not cross the line of no effect. As discussed earlier, this is because of substantial selection bias. Tudor et al.'s study [34] is close to the edge of the funnel; this is likely due to the very small number in the anastomosis group (n = 8) with an unusually high mortality (75%). Richter et al. [28], Medina et al. [35], and Drumm et al. [37] are low down in the funnel plot due to the small number in one or both of the groups. The rest of the studies are clustered at the top, indicating very similar characteristics.

Fig. 3

Fig. 3. Funnel plot of mortality (Hinchey III-IV studies). I2 = 0.10%, Q 13.7564, df = 12, p = 0.3165.

Fig. 4

Fig. 4. Q-Q plot of mortality (Hinchey III-IV studies).

The forest plot (Fig. 5) shows an overall effect with a relative risk of 0.92 (log RR −0.08, p = 0.0019) in favour of a primary anastomosis. Although the effect size is small, it does not cross the line of no effect (95% confidence interval −0.13 to −0.03). The fixed-effects model gave identical results.

Fig. 5

Fig. 5. Forest plot of mortality (Hinchey III-IV studies).

URL: https://www.sciencedirect.com/science/article/pii/S1743919118315747

Laparoscopic versus open pancreas resection for pancreatic neuroendocrine tumours: a systematic review and meta-analysis

Panagiotis Drymousis, ... Andrea Frilling, in HPB, 2014

Statistical analysis

This study was performed in line with the recommendations of the Cochrane Collaboration.16 Dichotomous variables were analysed using odds ratios (ORs), which represented the odds of an event occurring in the LPS group compared with the OPS group. An OR of < 1 favoured the LPS group, and the point estimate for the OR was considered statistically significant if the P-value was < 0.05, provided the 95% confidence interval (CI) did not include the value 1. Studies that contained a zero value for an outcome of interest in both the LPS and OPS arms were discarded from the analysis for this particular event. If a study contained a zero value for an event in one of the two groups, Yates' correction was applied. The effect of Yates' correction is to prevent the overestimation of statistical significance in small samples when 'zero cells' are present in a 2 × 2 contingency table. Such zero cells are reported to overestimate the OR measure and the corresponding standard deviation (SD).17 For the Yates' correction, a value of 0.5 is added to each zero cell of the 2 × 2 table for the study in question.
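The zero-cell adjustment described above can be sketched as follows. The cell counts are hypothetical, and the function follows this paper's convention of replacing only zero cells with 0.5 (other software adds 0.5 to every cell of an affected table):

```python
import math

def odds_ratio_zero_cell(a, b, c, d):
    """OR and standard error of log(OR) for the 2x2 table [[a, b], [c, d]];
    any zero cell is replaced by 0.5 before computing, per the convention above."""
    a, b, c, d = (x + 0.5 if x == 0 else x for x in (a, b, c, d))
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's formula
    return odds_ratio, se_log_or

# Hypothetical study arm: 0/20 events in the LPS group, 3/17 events in the OPS group
or_, se = odds_ratio_zero_cell(0, 20, 3, 17)
print(round(or_, 3), round(se, 3))   # 0.142 1.563
```

Without the adjustment the OR would be exactly zero and its log undefined, so the study could not contribute to the pooled estimate at all.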

In the analysis of continuous variables the weighted mean difference (WMD) was calculated. A random-effect meta-analytical technique was used for both continuous and dichotomous outcomes. In a random-effect model, it is assumed that there is variation among studies and therefore the calculated OR has a more conservative value. The random-effect model was selected to account for the heterogeneity produced by the inherent differences in the study population: patients were operated at different centres by different surgeons; the selection criteria for each surgical technique were inconsistent, and patient risk profiles were variable.

A qualitative assessment of the studies was performed, following the Newcastle–Ottawa Scale. 18 For the assessment, each study was examined on three factors: patient selection; comparability of the study groups, and assessment of the outcome. A score of 0–9 stars was assigned to each study according to the coding manual for cohort studies of the Newcastle–Ottawa scale. Heterogeneity was assessed in a sensitivity analysis using the following groups: (i) all studies, and (ii) studies reporting only on insulinomas. A sensitivity analysis on high- versus low-quality studies based on the Newcastle–Ottawa score was not feasible as all included studies scored between 6 (one study) and 7 (10 studies) on the relative scale (Table 1).

Table 1. Characteristics of studies reporting on patients with pancreatic neuroendocrine tumours (PNET) submitted to open pancreatic surgery (OPS) or laparoscopic pancreatic surgery (LPS)

Authors | Year | Study type | Type of PNET | Period of patient recruitment | Country | Patients, n | LPS, n | OPS, n | Conversion, n | Study quality (Newcastle–Ottawa scale)
Espana-Gomez et al.19 | 2009 | Retrospective | Insulinomas | 1995–2007 | Spain | 34 | 21 | 13 | 7 | *******
Gumbs20 | 2008 | Retrospective | Functioning (23%), non-functioning (77%) | 1992–2006 | France | 31 | 18 | 13 | 1 | *******
Hu et al.21 | 2011 | Retrospective | Insulinomas | 2000–2009 | China | 89 | 43 | 46 | 2 | *******
Karaliotas & Sgourakis22 | 2009 | Retrospective | Insulinomas | 1999–2008 | Greece | 12 | 5 | 7 | 1 | *******
Kazanjian et al.23 | 2006 | Retrospective | Functioning (29%), non-functioning (71%) | 1990–2005 | USA | 70 | 4 | 66 | NR | *******
Liu et al.24 | 2007 | Retrospective | Insulinomas | 2000–2006 | China | 48 | 7 | 41 | 3 | *******
Lo et al.25 | 2004 | Retrospective | Insulinomas | 1999–2002 | China | 10 | 4 | 6 | 0 | *******
Roland et al.26 | 2008 | Retrospective | Insulinomas | 1998–2007 | USA | 37 | 22 | 15 | 2 | *******
Sa Cunha et al.27 | 2006 | Retrospective | Insulinomas | 1999–2005 | China | 21 | 12 | 9 | 3 | ******
Zerbi et al.28 | 2011 | Prospective | Functioning (27%), non-functioning (73%) | 2004–2007 | Italy | 262 | 21 | 241 | NR | *******
Zhao et al.29 | 2011 | Retrospective | Insulinomas | 1990–2010 | China | 292 | 46 | 246 | 19 | *******
Total | | | | | | 906 | 203 | 703 | |

NR, not reported.

URL: https://www.sciencedirect.com/science/article/pii/S1365182X1531580X

Do antidepressants t(h)reat(en) depressives? Toward a clinically judicious formulation of the antidepressant–suicidality FDA advisory in light of declining national suicide statistics from many countries

Zoltán Rihmer, Hagop Akiskal, in Journal of Affective Disorders, 2006

In the 21 countries with decreased suicide rates, the decrease was greater in females in 19 countries (19/21 = 90%), while in the 9 countries with increasing suicide rates the increase was higher in males in 8 countries (8/9 = 89%; chi-square with Yates correction: 14.46, p = 0.0001). This is in good agreement with the findings of the Gotland Study, showing that the decreased number of depressive suicides was almost exclusively the consequence of the decrease in female depressed suicides (Rihmer et al., 1995; Rutz et al., 1997). It has also been repeatedly demonstrated that, among suicide victims, females contact their GPs or psychiatrists much more frequently some weeks or months before their death (Rutz et al., 1997; Luoma et al., 2002). It has also been demonstrated that the increase in the utilization of SSRIs is more pronounced in females in several countries, including Sweden, the United States and Australia (Isacsson, 2000; Hall et al., 2003; Grunebaum et al., 2004).

URL: https://www.sciencedirect.com/science/article/pii/S0165032706001728