Our previous post discussed the first 3 of 7 studies that appear in The Vaccine Book by Dr. Bob Sears, which he claims support Wakefield et al. (1998). We will now discuss the last 4 as they appear in the book on pages 258-259. The fourth is by Goldman and Yazbak (2004) and this is Dr. Sears' summary:
This group studied the incidence of autism in Denmark before the MMR vaccine was introduced compared to its incidence in the years thereafter. They found about a 400 percent increase in autism over those years. This group used the same data as the two Denmark studies listed on page 259, which concluded that there is not enough evidence to link mercury in vaccines to autism. In this study, however, Goldman's group concluded that there may be a link between the MMR vaccine and autism in Denmark.
It is important to mention that the Goldman and Yazbak study was published in the Journal of American Physicians and Surgeons, which is neither peer-reviewed nor indexed and is essentially an online magazine for those who fancy themselves scientists. Before we review the Goldman and Yazbak study, we have to discuss the study by Madsen et al. (2002), upon which the aforementioned study is predicated. Briefly, Madsen et al. examined records from entire birth cohorts born between January 1, 1991 and December 31, 1998 to compare autism diagnoses between children vaccinated with MMR and those unvaccinated. They found the same risk of developing autism or an autism spectrum disorder (ASD) in both vaccinated and unvaccinated children. The adjusted relative risk of autistic disorder was 0.92 (95% confidence interval, 0.68-1.24) and the adjusted relative risk of other ASDs was 0.83 (95% confidence interval, 0.65-1.07) (these confidence intervals will become more relevant later). There was no association between autism diagnosis and age at vaccination or the interval since vaccination. There was also no clustering of autism or ASD diagnoses around the time of vaccination.
Goldman and Yazbak (2004) attempt to critique the Madsen et al. paper and conduct their own statistical analyses. They start by stating:
Because autism is usually diagnosed at age 5 or older in Denmark, many children born in 1994 and thereafter would not have been diagnosed by the end of the study period. The systematic error of missing a large number of autism diagnoses in the later years was a major shortcoming. Children with Asperger's Syndrome and high-functioning autism, who have minimal speech impairments and are thus not diagnosed as early as more profoundly affected children, are especially likely to be undercounted in this study.
The Madsen et al. group reported a mean age of autism diagnosis of 4 years, 3 months, and a mean age of other autism spectrum diagnoses of 5 years, 3 months. They also cut off study observations on December 31, 1999, allowing for an additional year of observation beyond the last birth cohort, which Goldman and Yazbak fail to mention; even so, some children born after 1995 may have received an ASD diagnosis after the cut-off date. To be fair, however, the Cochrane Database Systematic Review criticized the unequal length of follow-up and the use of the date of diagnosis rather than symptom onset. The reviewers still used the study in their review, though, and concluded:
No credible evidence of an involvement of MMR with either autism or Crohn’s disease was found.
The Madsen et al. (2002) study had a total population of 537,303 (2,129,864 person-years): 440,655 (1,647,504 person-years) in their MMR-vaccinated group and 96,648 (482,360 person-years) in their unvaccinated group. They identified 738 cases of ASDs, which gave their study ample statistical power to detect an association between MMR and ASDs, if there was one. They also relied upon autism diagnoses made by specialists in child psychiatry, rather than onset of symptoms, which would have introduced reporting bias. Reporting bias can occur if investigators rely upon parental reporting of symptom onset, as many behaviours of ASDs could have existed prior to vaccination and been missed, whereas administration of a vaccine is a memorable event that could have confounded recall of the actual onset of an ASD. If MMR caused regressive autism within days of vaccination, then a diagnosis would follow shortly thereafter and not be deferred for years as Goldman and Yazbak imply. Additionally, Madsen et al. (2002) adjusted for age and time of follow-up. Goldman and Yazbak also state:
Additional flaws in the Madsen study included the unusual distribution of ages in the cohorts, censoring rules applied to cases, and failure to separate autism into regressive and classical cohorts. These and other cited methodological and statistical problems tended to mask the association with MMR vaccine, as unimmunized children were clustered in the earlier years of the study so that ascertainment was more complete in this cohort than in those immunized a few years prior to the end of the study period, when many cases of autism were missed owing to insufficient follow-up time to make the diagnosis.
yet they didn't state exactly why these were a problem. There was not an unusual distribution of ages in the cohorts because Madsen et al. (2002) used population data, so this ridiculous criticism can best be summed up by a quote from Adam Jacobs in his critique of the Goldman and Yazbak study: “In any case, it seems a little harsh to blame the Danes for the rate at which they breed.” They also mention censoring rules but fail to explain how they were a problem. Not only were the censoring rules in the Madsen et al. (2002) study very rational, but the censored cases amounted to less than 1% of their entire study cohort.
Follow-up of 5811 children was stopped before December 31, 1999, because of a diagnosis of autistic disorder (in 316 children), other autistic-spectrum disorders (in 422), tuberous sclerosis (in 35), congenital rubella (in 2), or the fragile X or Angelman’s syndrome (in 8), and because of death or emigration in the cases of 5028 children, whose data were censored.
So what this means is that the 738 children who received ASD diagnoses were not followed up after that; there was no point, as they were diagnosed and included in the final analyses. Forty-five children received an autism diagnosis attributable to genetic disorders and so weren't relevant to examining an MMR-autism causation; Madsen et al. (2002) did include these 45 children in an analysis and it didn't change the results. The remaining 5,028 children either died or emigrated, making it impossible to include them in the final analyses; this amounts to a loss of 0.94%, quite a negligible percentage. Again, Goldman and Yazbak's assertion that only older children were likely to be diagnosed with an ASD is completely contrary to their own argument that MMR causes regression within days, not years.
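The loss-to-follow-up arithmetic is easy to check; a minimal sketch in Python using the figures quoted from Madsen et al. (2002):

```python
# Figures quoted from Madsen et al. (2002)
total_cohort = 537_303  # children in the full birth cohort
censored = 5_028        # censored due to death or emigration

loss = censored / total_cohort
print(f"Loss to follow-up: {loss:.2%}")  # prints "Loss to follow-up: 0.94%"
```

Less than one child in a hundred lost to follow-up is about as good as registry-based cohort studies get.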
Then what Goldman and Yazbak do defies logic; they don't do any statistical analyses that even remotely resemble what Madsen et al. did. In fact, they perform a statistical smoke-and-mirrors show that Penn and Teller would have a field day with (if that was their shtick). The MMR was introduced in Denmark in 1987 and the diagnostic criteria for ASDs changed in 1993. Goldman and Yazbak took the 3 years preceding the diagnostic change, 1990-1992, tortured the data, extrapolated the rate of diagnoses out to 2000, and then claimed a 370% increase in ASDs after the introduction of the MMR vaccine. If you look at their Figure 1, there isn't any increase in ASDs following MMR introduction, even prior to the classification change; there would be if MMR vaccination were responsible for regressive autism, according to their own claim. They say:
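To see how fragile a linear extrapolation from just three data points is, here is a minimal sketch with hypothetical yearly counts (the numbers are purely illustrative, not Goldman and Yazbak's actual data):

```python
def extrapolate(years, counts, target_year):
    """Fit an ordinary least-squares line through (year, count) pairs
    and evaluate it at target_year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(counts) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts)) \
            / sum((x - mean_x) ** 2 for x in years)
    return mean_y + slope * (target_year - mean_x)

years = [1990, 1991, 1992]
# Extrapolating 8 years beyond a 3-point fit: nudging a single
# observation by 2 swings the year-2000 'prediction' by roughly 30%.
print(extrapolate(years, [8.0, 10.0, 13.0], 2000))  # ~32.8
print(extrapolate(years, [8.0, 10.0, 11.0], 2000))  # ~23.2
```

A trend fitted to three points will shoot off to wherever the fit sends it, which is exactly why the uncertainty around such an extrapolation is enormous.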
The true confidence intervals are wider than indicated because of error associated with linear regression of the trends both before and after 1994.
No, you don't do this; confidence intervals must always be reported. They tell you the precision of an estimate: for a given confidence level, the narrower the interval, the more precise the estimate. The width indicates the plausible range of an estimate, so narrow intervals like those reported by Madsen et al. mean their relative risks were estimated with high precision. Compare this to the confidence interval that Goldman and Yazbak failed to report; Grove and Jacobs were kind enough to calculate Goldman and Yazbak's 95% confidence interval, which was -3.57 to 82.6. This is absurd and most likely why they didn't report it; in other words, their 'predicted' autism prevalence based on their 1990-1992 data spanned negative numbers. This isn't even the best part; a crucial factor is missing from their analyses, intentionally:
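The contrast can be made concrete with a quick sketch comparing the widths of the intervals quoted above (Madsen et al.'s relative risks versus the interval Grove and Jacobs calculated for Goldman and Yazbak's prediction):

```python
def interval_width(lower, upper):
    """Width of a confidence interval; wider means a less precise estimate."""
    return upper - lower

# Madsen et al. (2002): adjusted relative risks, 95% CIs
print(f"{interval_width(0.68, 1.24):.2f}")   # 0.56 -- autistic disorder
print(f"{interval_width(0.65, 1.07):.2f}")   # 0.42 -- other ASDs
# Goldman and Yazbak's predicted prevalence (per Grove and Jacobs)
print(f"{interval_width(-3.57, 82.6):.2f}")  # 86.17 -- lower bound is negative
```

An interval whose lower bound dips below zero for a quantity that cannot be negative, like a prevalence, is a red flag that the model behind it is nonsense.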
Because we did not request population data stratified by vaccination status, we were unable to compare vaccinated and unvaccinated cohorts as had been done in historical studies. Instead, since the vast majority of children aged 5 to 9 years received MMR vaccine, we compared autism principally in this age group in periods before and after introduction of the vaccination program.
Hey, why let some trivial detail like whether a child actually received the MMR vaccine get in the way of statistical analyses to determine if MMR vaccination is causing an increase in ASD prevalence? They did not use the same data as Madsen et al. (2002), nor did their data manipulation support an MMR-autism association as Dr. Sears claims. There is simply no defense for such abuse of statistics to intentionally deceive readers.
The fifth study is by Bradstreet et al. (2004); not only is it another little gem that appears in the Journal of American Physicians and Surgeons, but it features Wakefield as a co-author. This is Dr. Sears' summary:
This group found measles virus RNA in the CSF and intestinal biopsies of three children who had gastrointestinal inflammatory disease and autism. The only known exposure these kids ever had was from the MMR vaccine. Three control patients did not have measles detected in their samples.
They don't get off to a very good start by reporting:
All have received multiple interventions ranging from dietary modification to intravenous immunoglobulin, though the details of these interventions and the relevant outcomes are not reported here.
That's like conducting a study on the tomato yield of different plants while using various watering and fertilisation strategies and deeming those variables unimportant to control for or report. Since they are testing for anti-measles antibodies, it is kind of important to control for the fact that the children were receiving immunoglobulin, at the very least. And once again, they rely upon parental reports of regression without validating them against medical records.
Their methods for viral detection rival what you would expect from "Sid the Science Kid". In actuality, they are worse, for they sent their samples to Unigenetics and used all of the same parameters as Uhlmann et al. (2002), which were deconstructed in our first post of this series. They also used Singh et al.'s (2003) poorly defined immunoblotting and anti-MBP assays, neither of which has been validated or standardised for diagnostic value.
Results of CNS autoantibody and virus IgG profiling are shown in Table 3. MBP autoantibodies were present in the serum of all three children and CSF of children 1 and 2. NFAP antibody was present in the serum of child 2 only. MV IgG antibody titers were reported as high in the sera of children 1 and 2, and detectable at a low level in the CSF of these same children. MVIgG antibody titer was reported as being within the normal range in the serum of child 3 and undetectable in his CSF. Where samples were analyzed for the previously reported MMR-associated antibody, they were negative. Human Herpesvirus-6 serology was unremarkable, and specific IgG antibody was not detected in CSF of the two samples in which it was sought (children 2 and 3).
It is curious that they reported anti-measles IgG antibody as high in some of the autistic children yet didn't report the results from their “control” children. There is no “normal range” for these antibodies, and a high titre is not indicative of anything. And given that some or all of these children were treated with “immunoglobulin therapy”, it's not surprising that their titres would be elevated.
Bradstreet et al. (2004), too, did not sequence their PCR amplicons, which is standard practice, that is, if investigators are actually interested in the accuracy of their results. What they lack in proper methods, they try to compensate for with a bloviated discussion that is, of course, corroborated by all of the junk science that preceded it. While they test for anti-MBP autoantibodies in the autistic children, they do not test the “control” children, and they don't explain their results, which are presented as a throw-away sentence buried in a table. One wonders why they would even mention the significance of these autoantibodies in their background, then not test their “controls” for them, and then not explain their results or their significance, not that they really have any. We hope you aren't getting bored, but this is nothing more than yet another foray into the world of pseudo-science.
The sixth article Dr. Sears cites as “showing a link between MMR vaccine and autism” is by Kushak et al. (2005) and is actually not a published study at all but a poster presentation; what you see is the extent of it unless you attended the conference at which it was presented. This is Dr. Sears' summary:
This Harvard group essentially reproduced Dr. Andrew Wakefield's work by finding chronic inflammation, lymphoid hyperplasia, and digestive enzyme deficiency in the gastrointestinal tract of numerous autistic children. It didn't explore a possible link to measles infection from the MMR vaccine, however.
We don't see what their Harvard affiliation has to do with anything. This group did not 'reproduce' Wakefield et al.'s work; they merely examined gastrointestinal problems in autistic and non-autistic children and some enzyme values. They didn't find any significant differences in gastrointestinal problems between autistic and non-autistic children, but did find some statistically significant abnormal enzyme levels in the autistic children as compared to the non-autistic children, which could easily be explained by diet. We don't see how Dr. Sears came to the conclusion that the investigators found “chronic inflammation, and lymphoid hyperplasia” in any of the children when that wasn't even reported. Since this was a poster presentation in abstract form, and has absolutely nothing to do with Wakefield's findings, there is really nothing to evaluate here.
Onto the seventh and final study by Geier and Geier (2004) and this is Dr. Sears' summary:
This group studied the increase in autism over the past twenty years compared with the timing of increased thimerosal vaccines and the introduction of the MMR vaccine and found evidence that these may play a role in neurodevelopmental disorders. They recommended taking thimerosal out of vaccines and finding a safer MMR vaccine.
It is important to first point out the Geiers' conflict of interest statement in this publication, which is:
Dr. Mark Geier has been an expert witness and a consultant in cases involving adverse reactions to vaccines before the U.S. Vaccine Compensation Act and in civil litigation. David Geier has been a consultant in cases involving adverse reactions to vaccines before the U.S. Vaccine Compensation Act and in civil litigation.
Also of interest is that they were developing a patent application for their Lupron Protocol, which was submitted later that year. For more about the Geiers' situational ethics and conflicts of interest, you can read Kathleen Seidel and Respectful Insolence. Onto the actual study.
The Geiers used U.S. Department of Education (DOE) datasets to determine autism prevalences. These data show children receiving services for autism as it was diagnosed at that time. They also used CDC birth surveillance data for the estimations performed on birth cohorts from 1981-1985 and 1990-1996, leaving out 4 years, 1986-1989, with no explanation as to why. In order to 'determine' thimerosal exposure they used the Biologic Surveillance Summaries of the CDC, of course without knowing the actual vaccines received by the infants. These methods alone render this 'study' completely useless, but being the brave souls we are, we will forge ahead anyhow.
They essentially did the same thing as above for the estimation of MMR vaccine uptake and autism prevalence but had even larger gaps in their data. They only used the years 1982, 1985 and 1991-1996, again with no explanation, though the reason for the exclusions becomes obvious later in our discussion of Figure 3. Stay with us here, because this makes little sense: they took their selected birth cohort years, pulled out the DOE autism prevalences corresponding to those years, estimated the autism prevalence for each birth cohort and banged them together for their MMR-autism prevalence estimate. The children receiving MMR would have been too young for an actual autism diagnosis. And why bother with pesky details about when and which children were actually vaccinated when the desired results have been pre-determined? For instance, they decided to use the 1984 birth cohort as a 'baseline', but their explanation as to why is nonsensical because they haphazardly used several birth cohorts instead of tracking one, which seriously confounds results due to changes in diagnostic criteria, special education expenditures, birth cohort size and ages of diagnosis, none of which they controlled for.
What they also fail to account for, particularly in the earlier cohorts of their thimerosal exposure table (Table 1), is the 1982 recommendation of the Hepatitis B vaccine for infants, which contained thimerosal. In other words, their Table 1 is even more wrong and skews the estimated mercury exposure higher for later birth cohorts. So on to their other results:
In Figure 2 we plotted the average mercury dose per child in comparison to the prevalence of autism per 100,000 children for successive birth cohorts (birth cohorts: 1981 through 1985 and 1990 through 1996). Figure 2 shows that as the prevalence of autism increased from the birth cohorts from the late 1980s through the early 1990s a corresponding increase in the average mercury dose per child occurred. A maximum occurred in the birth cohort of 1993 in both the average mercury dose per child and the prevalence of autism. A decrease in both the prevalence of autism and the average mercury dose per child occurred from 1993 through 1996.
This is best described by showing you Figure 2:
If their data crunching wasn't bad enough, they seem to happily torture the data graphically as well. As you can see, they omitted the years 1986-1989 (with no reasoning) but still presented the gap as continuous data. You just don't do that, plain and simple, so at best they are buffoons and at worst, liars. They also claim that the prevalence of autism decreased concomitantly with thimerosal exposure. No, no and no; autism prevalence did not decrease, as anyone reading this knows, and their thimerosal exposure estimates are rubbish to begin with. As for Figure 3:
Figure 3 shows the number of doses of primary pediatric measles-containing vaccine in comparison to the prevalence of autism for each birth cohort examined (birth cohorts: 1982, 1985, and 1991 through 1996). This figure shows that there was a potential correlation between increasing doses of primary pediatric measles-containing vaccine and an increasing prevalence of autism during the 1980s. We determined that the slope of the line was 4831, and the linear regression coefficient for the line was 0.91.
These are the estimates for which they omitted even more years, and they came up with a correlation. They could have plotted the number of artificial satellites we have in orbit or the use of smilies in internet communication and come up with a correlation. As a rule of thumb, when you want to just 'eyeball' such a graph for potential problems, remove any 2 data points and see whether the trend line disappears; if it does, there could be a problem. Let's take off their 1982 and 1985 data points: the trend line completely disappears. We consider the very real possibility that the other years were intentionally omitted because they didn't plot to the Geiers' liking.
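The rule of thumb above is easy to demonstrate with hypothetical numbers shaped like their Figure 3 (two low early points, then a flat cluster of later points; these values are illustrative, not the Geiers' data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two low early points plus a flat later cluster (illustrative values)
years = [1982, 1985, 1991, 1992, 1993, 1994, 1995, 1996]
prev  = [5, 8, 30, 31, 29, 32, 30, 31]

print(f"{pearson_r(years, prev):.2f}")          # 0.95 -- driven by two points
print(f"{pearson_r(years[2:], prev[2:]):.2f}")  # 0.25 -- the 'trend' is gone
```

Two high-leverage points can manufacture an impressive regression coefficient all by themselves, which is exactly what removing 1982 and 1985 reveals.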
Figures 1 and 4 and their corresponding results are really nothing; garbage in, garbage out. They again cherry-picked certain years to report for Figure 4, which had no statistical significance in spite of what they report (overlapping confidence intervals bugger up significance like that). It is also curious that this 'study' was written in 2003, yet the Geiers didn't bother to use data after 1996, nor acknowledge that thimerosal has not been in vaccines, except in trace amounts, since 2001, while autism prevalence has continued to rise. The Institute of Medicine's 2004 Immunization Safety Review: Vaccines and Autism said of this study and others by the Geiers:
Other studies reported findings of an association. These include two ecological studies (Geier and Geier, 2003a, 2004a), three studies using passive reporting data (Geier and Geier, 2003a,b,d), one unpublished study using Vaccine Safety Datalink (VSD) data (Geier and Geier, 2004b,c), and one unpublished uncontrolled study (Blaxill, 2001). However, the studies by Geier and Geier cited above have serious methodological flaws and their analytic methods were nontransparent, making their results uninterpretable, and therefore noncontributory with respect to causality (see text for full discussion).
Dr. Sears over-inflates the results and importance of this heavily biased study. The Geiers did not examine 20 years' worth of data; at best it was 12 years, and a poor job at that, and they didn't find any evidence that MMR or thimerosal played any role in neurodevelopmental disorders. They are also a little late to the game if they are recommending that thimerosal be removed from paediatric vaccines when that was done 2 years prior to their 'study'. What would a safer MMR look like, anyhow?
This concludes our series critiquing the studies Dr. Sears uses to support Wakefield's findings. Are you sensing a theme here? Bad science manages to find an MMR-autism connection, whereas good science cannot. As parents who are concerned with these issues, you should be angry, very angry. You are being bamboozled, hoodwinked, taken for a ride by charlatans who rely upon the scientifically unsophisticated to just read introductions and discussions but gloss over the methods and results. This is the way these 'scientists' can be broken down:
A.) They are too inept to perform sound, methodological science.
B.) They aren't too inept but believe these claims and don't mind cutting corners, being the mavericks they are.
C.) They have an agenda, financial or otherwise and laugh at the ease by which they can take advantage of the naive and desperate.
If anyone can come up with a valid alternate explanation that doesn't involve Galileo or Copernicus, we're listening. Maybe we are preaching to the choir; if so, you can feel assured about your own conclusions. If not, we hope that you can at least begin to break free of the grip of the pseudo-science that leaves you feeling overwhelmed by the glut of information, recognise that there aren't two equally valid sides to this issue, and perhaps become better equipped to spot junk science.