[obol] Re: Credibility (shorter & less rambling version)

  • From: "Tom Crabtree" <tc@xxxxxxxxxx>
  • To: <joel.geier@xxxxxxxx>, "'Oregon Birders OnLine'" <obol@xxxxxxxxxxxxx>
  • Date: Tue, 8 Jul 2014 13:58:41 -0800

I think Joel did a great job of explaining the concept of “rarities” within the 
context of eBird, BirdNotes, and similar databases.  As a regional reviewer 
for eBird (Deschutes, Crook & Jefferson Counties), I see a lot of merit in his 
suggestion to monitor some species that are of concern from a conservation 
standpoint rather than rare from a statewide or range standpoint.  This is easily 
done by local reviewers.  We all have access to the filters that eBird uses.  
Getting them to be accurate and meaningful is a constant work in progress.  I have 
set some filters to zero for the county because of birds that are severely 
declining locally.  

 

Willow Flycatcher is one such bird in Deschutes. I have received a bit of blowback 
on this from some people, particularly when I ask how they separated it from 
Western Wood-Pewee and the other empids found in the region.  But when locals 
only find a handful of birds in migration and virtually no breeding birds, it 
makes sense to change the filter to “0” for sensitive species so we can better 
keep track of them.

 

Tom Crabtree, Bend

 

From: obol-bounce@xxxxxxxxxxxxx [mailto:obol-bounce@xxxxxxxxxxxxx] On Behalf Of 
Joel Geier
Sent: Monday, July 07, 2014 9:36 AM
To: Oregon Birders OnLine
Subject: [obol] Re: Credibility (shorter & less rambling version)

 

Hi again all,

Looking at what I wrote last night, it seems to be a good illustration of why 
it seldom pays to start writing something after normal bedtime.

Here is an attempt at a more concise version, which will hopefully make better 
sense:

My suggestion is that the current shape of the eBird review process, with its 
strong focus on "rarities" (whether in time or space), is mainly an outcome of 
an uneasy marriage between science (the effort to gain knowledge about bird 
distributions etc.) and competitive birding (a sport in which participants 
compare their accomplishments in terms of "big day" lists, "big year" lists, 
etc.). Again, I am sympathetic because we had to deal with the same issues 
concerning BirdNotes.

Let's start with the premise that eBird, like earlier similar projects, was set 
up with the aim to collect casual bird observations by birders, with the hope 
of gaining useful information about bird occurrence patterns.

Many of the most active birders in a given region tend to be list-oriented. The 
most common types of lists, apart from yard lists, tend to be based on 
politically defined regions such as states, provinces, or counties. Hence the 
most active birders tend to be keenly aware of the status of birds in the 
states or counties where they regularly go birding.

It is arguable whether rare-bird reports have any significant impact on the 
scientific aims of data-gathering projects such as eBird. For a robust 
scientific analysis of data, one normally excludes "outliers." Significant 
findings (in terms of timing of migration, breeding range extensions, etc.) 
generally need to be supported by a large number of observations, not just one 
or two unusual reports.

For example, it would be ill-advised to re-draw the range map for Common 
Yellowthroat just because someone reported one in Wheeler County, regardless of 
whether that observer is considered "credible" or not by their peers in the 
birding community. However, if multiple observers start to see a species with 
regularity during nesting season (as we've seen in the case of Red-shouldered 
Hawks in recent years here in Benton County), then there is robust support for 
changing the range map. If a trend is real, eventually good observations will 
swamp the anomalies.

That approach works OK in a purely scientific effort. The problem is that eBird 
(like BirdNotes before it) is trying to glean scientific data from a 
recreational activity, in which the leading participants -- list-oriented 
birders -- tend to pay lots of attention to outliers, a.k.a. "rarities."

When "rarities" (I'm using quotes here because most "rarities" are common 
someplace else) show up in a database, list-oriented birders tend to take 
notice. If they feel that some of these "rarities" were incorrectly identified, 
they start to criticize the data gathering effort. We saw this with BirdNotes, 
and we're still seeing it with eBird. 

If a database loses credibility among these birders -- who tend to be among the 
most prominent birders in a given state -- their views might discourage other 
birders from participating in the data gathering effort.

Hence even if "rarity" reports are usually not significant for the most 
credible types of scientific analysis that could be done using data from 
recreational birding, they can have a big impact on how the database is viewed 
by birders. This provides motivation to give special attention to "rarity" 
reports, far out of proportion to their actual scientific or conservation 
significance.

For the sake of science and bird conservation, I suggest that a different focus 
to the review process would be desirable. Even if it's still necessary to flag 
county-level rarities in order to keep up appearances with list-oriented 
birders who are keenly alert to any surprises in their favorite patch of real 
estate, why not put at least equal focus on birds that are of conservation 
concern?

For example, when I've looked up Vesper Sparrow records in western Oregon 
(where the nesting subspecies, Oregon Vesper Sparrow, has been on the state 
list of Species of Concern for many years), I seldom see any details that I 
would expect if these records were being reviewed. For a population that seems 
to be well below 2000 birds (and falling), incorrect identifications could 
have a significant impact on the picture that emerges from eBird. Beginning and 
even intermediate birders could easily mistake a Song Sparrow for a Vesper 
Sparrow, and even advanced birders could sometimes get fooled when trying to 
identify one by ear (for example, if you hear a distant Bewick's Wren or Song 
Sparrow). 

Again, I think that censoring data would be a mistake -- this introduces its 
own type of bias in the dataset. Rather, I would like to see birders being 
encouraged to clarify the basis for their identification of species of 
conservation concern: Was the bird seen, seen and heard, or heard only? And how 
well was it seen or heard? This type of information can be useful for following 
up reports, to try to confirm whether a species of concern is using a given 
patch of habitat.

I think if birders see more focus on conservation issues, and less on 
"rarities," there could also be a positive effect on data gathering: more focus 
on correctly identifying birds for which a few data points could really make a 
difference, and less focus on trying to list more species for the 
year, month, county, state, or whatever.

Good birding,
Joel

--
Joel Geier
Camp Adair area north of Corvallis 
