At least from this article, it's not clear just what the complaint is
really about.
Public confidence in technocratic governance declines all the time, in
large part because every 3-5 years "research shows" something that
contradicts the policies we "had" to have based on the last study. This
is especially evident in health care and prevention, but also in
education and much social welfare policy. Policies are either based on, or
rationalized by, research studies that are very, very far from being
definitive enough to base social policy on. A higher standard for the
research used to justify government regulations seems appropriate ...
EXCEPT in cases of exceptional public risk: once some evidence of serious
risk appears, even if not yet definitive, we should err on the side of
caution until the practice, substance, or technology is proven safe enough.
Industry, of course, is looking for a way out of the Tobacco Dilemma, and in
most cases I think it is just looking to slow down the regulatory process so
it can recover costs or have time to change technologies cheaply before new
regulations take effect. But while its motives may be venal, it would
make sense to have higher standards for the research basis of
regulations in general, with the important classes of exceptions noted above.
The new twist in these regulations seems to concern "publication". I
don't really see why the government should be in the business of publishing
academic research, and I can think of a lot of reasons why it
shouldn't be. But once again, there could be different classes of
"publication". A new AIDS study could be put on the internet by a
government agency in stages: first made available to other researchers
who are qualified to evaluate it; then, with appropriate warnings that it
has not yet been replicated, made available to physicians; and finally,
with big red-letter warnings, made available to the public. In those
cases where there is good reason for the government to be a distributor
of information, its "nihil obstat" and "imprimatur" (for whatever they
are worth to a skeptical public) should be balanced by some sort of
disclaimer, and perhaps by explicit indications of just how reliable the
study or its conclusions are by various criteria (peer review; online
vote of qualified experts; replication; number of peer-reviewed studies
coming to other conclusions or challenging this one; etc.).
Peer review sucks. The little research there is on the effectiveness of
peer review shows that, even in the sciences, the rating of a single
manuscript is a function of who it was sent to, and that responses from
reviewers often range all the way from "crap" to "publish as
is". This is one of the dirty little secrets of both science and the
academy generally. (Another is how few people ever read any given journal
article after it's published.) Peer review is not a reliable basis for
public policy or private action when anything of material importance is at
stake. It takes a LONG TIME before the reliability of research conclusions
is established: much longer than the timescale of peer review.
Replication is a healthy standard, especially if we add to it "robustness";
that is, not just the replicability of the exact same study, but the
attempt to replicate the conclusions by doing variants of the study: in
other places, with other people (or rats), using more subjects, using more
diverse data sources, looking at the persistence of effects over longer
time periods, varying other background conditions, etc.
I think it is very healthy to direct a lot more attention to the interface
between research and policy. The technocratic paradigm (research says X,
therefore we must do Y) dangerously elides questions of value conflict
from policy discourse. It invites the commissioning and publication of
research designed to favor pre-determined policies that reflect particular
interests and values. It abets the unwise practice of jumping to premature
policy conclusions on the basis of limited research. And if we really want
something to worry about, there is always the deeply uncomfortable question
of just how viable an ideal of participatory democracy remains if key
policy decisions come down to expert judgments about the reliability
of facts and the robustness of conclusions.
JAY.
At 01:09 AM 9/29/2001 -0400, you wrote:
>Scientific values? In service of no regulation or deregulation? This one is
>sort of a mind bender.
>
>
> >
> > Friday, September 28, 2001
> >
> > New Federal Science Standards Worry University Researchers
> >
> > By JEFFREY BRAINARD
> >
>University researchers are nervous about a new set of
>standards, released by the Bush administration on Thursday, that govern
>the quality and objectivity of scientific
> information released by federal agencies. They worry that the rules
>could result in costly and time-consuming
>double-checking of studies that have already undergone peer
>review.
>
>Industry advocates have supported the rules as a way to rein
>in agencies that, they say, use inaccurate scientific
>information to justify regulatory actions.
>
>For scientists, a particularly worrisome aspect of the rules
>would require agencies to ensure that any scientific results
>they release be "capable of being substantially reproduced."
>This means that agencies would have to ensure that the results could be
>verified independently.
> >
>The rules give members of the public the right to complain
>that a particular scientific study did not meet that standard
>or other measures of quality. Agencies would be required to
>respond to such complaints, review the study, and correct it
>if it is found to be in error.
> >
>To perform this review, a federal agency might be forced to
>repeat scientific studies performed in academe, said Tony
>DeCrappeo, associate director of the Council on Governmental Relations,
>a group of 144 research universities. That extra time and expense could
>discourage agencies from publishing results of the studies, depriving
>the public of useful information, he suggested.
> >
>The White House Office of Management and Budget released the rules on
>Thursday, in response to a law sponsored last year by Rep. Jo Ann
>Emerson, a Republican from Missouri.
>
>The new rule is a sequel to a similar federal policy that
>generated opposition from universities two years ago. That
>rule stemmed from legislation by Sen. Richard C. Shelby, a
>Republican from Alabama. It allowed the public to obtain
>results and raw data from federally supported research that
>agencies used to develop regulations. For example, when the
>Environmental Protection Agency moved to tighten air-quality standards
>in 1997, it relied partly on health studies by university researchers
>financed by the National Institutes of Health.
>
>The policy released Thursday -- dubbed "Daughter of Shelby" by some --
>would have a wider sweep, covering studies published by agencies
>regardless of whether they formed the basis for regulations. Unlike Mr.
>Shelby's measure, however, the new rule refers only to scientific
>results, not the raw data on which they are based.
> >
>Agencies now have one year to adopt specific standards of quality
>tailored to the kinds of information they disseminate.
> > The administration plans to solicit further public comment about the
>controversial requirement that studies be reproducible, but for now,
>that requirement will stand.
> >
>In its announcement Thursday, the Office of Management and Budget said
>it had tried to clarify the rule in response to
>100 comments it received about an earlier draft, many of which came from
>universities. The final version gives agencies leeway to determine the
>"appropriate level of correction for a complaint received" about
>scientific information. Agencies can raise or lower the bar for quality
>depending on the importance of the information. The agencies can reject
>frivolous complaints.
> >
>Mr. DeCrappeo of the Council on Governmental Relations said that many
>academic studies relied on by federal agencies have undergone peer
>review, including publication in scientific journals. That should ensure
>that the studies are of sufficient quality and can be reproduced, he
>said.
>
>However, the Office of Management and Budget said that peer review may
>not be sufficient in some cases. And supporters of the office's action
>agree.
> >
>"Peer review is done in the back room, outside of the public's input,"
>wrote Jim J. Tozzi in the September 17 Federal Times.
>
>Mr. Tozzi advises businesses about regulations, and is an
>adviser to the Center for Regulatory Effectiveness, a watchdog group
>that lobbied for the new rule. "Peer-review standards for different
>journals vary substantially," he wrote, "as does the scientific
>community's acceptance of those journals."
> >
> >
> > _________________________________________________________________
> >
> > Chronicle subscribers can read this article on the Web at this
> > address:
> > http://chronicle.com/daily/2001/09/2001092801n.htm
> >
> > _________________________________________________________________
> > Copyright 2001 by The Chronicle of Higher Education
> >
---------------------------
JAY L. LEMKE
PROFESSOR OF EDUCATION
CITY UNIVERSITY OF NEW YORK
JLLBC@CUNYVM.CUNY.EDU
<http://academic.brooklyn.cuny.edu/education/jlemke/index.htm>
---------------------------
This archive was generated by hypermail 2b29 : Wed Oct 10 2001 - 15:49:22 PDT