Should you have faith in your analyst?

Most business decision-making begins with data collection, and most of the time the information gathered takes the form of words. Once the words are available, professionals analyze them and present the results to the decision maker. Recent scientific research indicates that these professionals frequently make errors in their qualitative data analysis. This article examines one such study as evidence.

Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med. 1998 Sep;32(3 Pt 1):310-7. The authors introduced ten major and thirteen minor errors into a fictitious scientific manuscript and sent it to all reviewers for the Annals of Emergency Medicine, the official journal of the American College of Emergency Physicians. The Annals has been published for over 25 years and is the most widely read journal in emergency medicine. The manuscript described a standard double-blind, placebo-controlled study examining the effect of propranolol on migraine headaches. In all, 203 reviewers provided feedback on the manuscript; 80% were faculty members at academic emergency medicine departments, and 20% were physicians in private practice.

The reviewers' recommendations, and the errors each group missed, break down as follows:

| Recommendation | Reviewers | Major errors missed | Minor errors missed |
| -------------- | --------- | ------------------- | ------------------- |
| Publish        | 15        | 82.7%               | 88.2%               |
| Revise         | 67        | 70.4%               | 78.0%               |
| Reject         | 117       | 60.9%               | 74.8%               |

The table indicates that, on average, the 15 reviewers who recommended publication missed 82.7 percent of major errors and 88.2 percent of minor errors. In other words, they missed at least four out of every five errors inserted into the manuscript. The authors defined major errors as errors that "invalidated or significantly weakened the study's conclusions." It is worth noting that one of the manuscript's minor errors was a misspelling of the drug's name: 30 of the 203 reviewers took the misspelled name to be correct and used it throughout their reviews. The study's authors remarked on the findings (with typical scientific understatement): "We were surprised at the low number of errors identified in this study. The major errors in the manuscript invalidated or undermined each of the study's major methodological steps… Even a fraction of these errors should have indicated that the study was unsalvageable, yet the reviewers identified only 34% of these errors, and only 59% of the reviewers rejected the manuscript."
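The quoted figures can be checked against the table above. Here is a quick back-of-the-envelope sketch (assuming the three recommendation groups, which account for 199 of the 203 reviewers, are representative of the whole panel):

```python
# Back-of-the-envelope check of the quoted figures, using the table above.
# Assumes the three recommendation groups (199 of 203 reviewers) are representative.

groups = [
    # (reviewers, % major errors missed, % minor errors missed)
    (15, 82.7, 88.2),   # recommended publication
    (67, 70.4, 78.0),   # recommended revision
    (117, 60.9, 74.8),  # recommended rejection
]

total = sum(n for n, _, _ in groups)                              # 199 reviewers
major_missed = sum(n * major for n, major, _ in groups) / total   # weighted average

print(f"Average major errors missed: {major_missed:.1f}%")        # ~65.7%
print(f"Average major errors caught: {100 - major_missed:.1f}%")  # ~34%, matching the quote
print(f"Share recommending rejection: {117 / total:.0%}")         # ~59%, matching the quote
```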

Consider the following:

1. The reviewers in this study were professors and private practice physicians with an average of three years' experience reviewing scientific manuscripts for the Annals, additional years reviewing manuscripts for two other scientific journals, and ten years of experience practicing emergency medicine. They had far greater expertise in the subject matter of the test manuscript than even the most experienced market researchers analyzing qualitative customer data, human resource managers analyzing candidate data, lawyers analyzing patents, or investment analysts and consultants analyzing business data. If professors and physicians cannot identify significant errors in a standard scientific manuscript, what are the chances that less-trained professionals will spot gaps and inconsistencies in non-standard qualitative business data?

2. In this study, the reviewers were asked to identify technical errors in the manuscript, and the years of training every scientist undergoes are aimed precisely at identifying and eliminating this type of error. Most qualitative data in business, by contrast, contains psychological inconsistencies and gaps, and, unlike scientists, most other professionals receive little to no training in identifying psychological errors. If professors were unable to identify the majority of technical errors, what are the chances that less-trained professionals will be able to identify the far more difficult psychological ones?

3. How concerned should you be when a market researcher analyzes your focus groups? The transcript of a typical focus group runs to approximately 12,000 words, while a typical manuscript contains approximately 3,000 words, a quarter of a single focus group. A typical market research study consists of four to eight focus groups, or sixteen to thirty-two times more text than the manuscript (see the sketch after this list). If the experts in this study were unable to identify the majority of technical errors in a dataset one-fourth the size of a single focus group, what are the odds that a market researcher will identify psychological and logical inconsistencies in a much larger one?

4. How concerned should you be when a human resource manager analyzes candidates? The transcript of an hour-long interview contains approximately 6,000 words (when hiring middle and top managers, the interviews can take a whole day, producing an order of magnitude more words). Even when only a few candidates are interviewed, the total data can reach 30,000 or more words (for five candidates). If the experts in this study were unable to identify significant errors in a dataset half the size of a single interview, what are the odds that a human resource manager will do so with a much larger one?

5. How concerned should you be when an investment analyst analyzes companies on your behalf? Annual reports running to tens of thousands of words are not uncommon: IBM's 2004 annual report, for example, is 100 pages long and contains more than 65,000 words. If the experts in this study were unable to identify significant issues in a dataset containing less than 5% of the text in IBM's 2004 annual report, what are the odds that an investment analyst will identify significant issues in a much larger one?
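The volume comparisons in points 3 through 5 come down to simple arithmetic. Here is a minimal sketch of the calculations (all word counts are the rough estimates quoted above, not measurements):

```python
# Rough data-volume comparisons from points 3-5 above.
# All word counts are the approximate figures quoted in the text.

MANUSCRIPT = 3_000    # typical scientific manuscript
FOCUS_GROUP = 12_000  # transcript of one focus group
INTERVIEW = 6_000     # transcript of a one-hour interview
IBM_REPORT = 65_000   # IBM's 2004 annual report

# A market research study of 4-8 focus groups vs. the test manuscript:
low, high = 4 * FOCUS_GROUP / MANUSCRIPT, 8 * FOCUS_GROUP / MANUSCRIPT
print(f"Focus group study: {low:.0f}x to {high:.0f}x more text")      # 16x to 32x

# Five one-hour candidate interviews vs. the test manuscript:
print(f"Interview set: {5 * INTERVIEW / MANUSCRIPT:.0f}x more text")  # 10x

# The test manuscript as a share of one annual report:
print(f"Manuscript vs. annual report: {MANUSCRIPT / IBM_REPORT:.1%}") # ~4.6%, under 5%
```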

The Baxt et al. study shows that even highly trained professionals such as professors and physicians frequently fail to identify significant technical errors in a standard qualitative dataset, and so arrive at the wrong decision. What are the odds that less-trained professionals will outperform them in identifying more difficult psychological gaps and inconsistencies in a much larger, non-standard dataset? And if the professional analysts fail, what are the chances that you will make the correct decision despite being misdirected?
