I don’t think this paper supports your premise (“climate change is not a problem, but rate of change is”)
They are essentially determining the relative temperature rates of change in different environments. They give no indication of how fast this velocity is relative to the past, or any prediction as to how it affects anything tangible. For all we know the velocity is less extreme than average (I’m not saying this is likely - just that we can’t tell from their data or analysis).
I would call this a “water is wet” paper - the basic conclusion is “different environments respond to climate change at different rates” which is intuitive and doesn’t need research. I admit that I could be missing some novelty but based on reading a lot of other “water is wet” climate papers, I doubt it.
I don't understand your criticism, you're looking for things that don't need to be there.
The authors of the paper propose a model to measure how much the positions of individual climate environments are changing per year. That seems like a great measure for determining the impact on animal and plant species - they describe that only 8% of environments stay at similar levels for at least 100 years, meaning that the animals and plants everywhere else will have to drastically adapt to new environments.
You're looking for comparisons with earlier changes in climate, but the paper is just about proposing a model to determine change in position of environments. Why should such a paper also compare with historical data?
I disagree, you can’t tell how useful their model is for assessing impact on animal and plant species because they do not have any animal or plant data in their model. Maybe animals and plants can move faster than we thought so the current velocity doesn’t dictate local extinctions? Maybe temperature velocity is a bad measure of climate change rate with respect to ecosystems?
My criticism is that the paper takes half of the problem and uses some ambiguous criteria (e.g. the 100-year threshold), so their model is effectively useless on its own.
It’s like writing a paper that describes how long your piece of string is; unless your measurement method is novel it’s useless for anyone else wanting to measure their own string.
> I disagree, you can’t tell how useful their model is for assessing impact on animal and plant species because they do not have any animal or plant data in their model. Maybe animals and plants can move faster than we thought so the current velocity doesn’t dictate local extinctions? Maybe temperature velocity is a bad measure of climate change rate with respect to ecosystems?
Are they making any judgement on this? I can't see it. I just see a proposed model for the change of position of environments. They're not saying "and this will lead all animal species to die out" or "we must act now", they just say "this is how the environments change positions". So why are you looking for more?
> My criticism is that the paper takes half of the problem and uses some ambiguous criteria (e.g. the 100-year threshold), so their model is effectively useless on its own.
What use are you looking for? They proposed a useful model that other papers can build upon. That is how science works. Why must they look at this problem from every angle you want for the paper to be useful?
> It’s like writing a paper that describes how long your piece of string is; unless your measurement method is novel it’s useless for anyone else wanting to measure their own string.
No, it's more like a paper that describes a model to determine the length of a piece of string by showing it for a specific example. You find it useless because you say "hey, you guys didn't look at my string!".
I admit I’m probably being a bit too picky, but I honestly don’t think this is how science works. I think this paper is a good example of the low-quality papers clogging up academic publishing due to perverse incentives.
I had a quick look through the first half dozen citations of the paper, and a lot of it is stuff like this: “The effects of modern climate change are occurring more quickly in grasslands relative to many other ecosystems (Loarie et al. 2009;” The citing paper doesn’t really use any of the data/methods of the original; it just cites it to support the fairly obvious idea that climate change has variable effects that differ by environment.
Another: “On the other hand, the climate is shifting rapidly (Loarie et al. 2009)” - again nothing in the paper is used, and they claim the paper says change is rapid, but to determine what is “rapid” we would need to compare our velocity to a value we think is “not rapid”. That would give us a testable hypothesis (e.g. “2% of regions have a velocity faster than the fastest-moving population in the region”).
I don’t think my analogy was very good - I’m not critical because they didn’t look at my string, I’m critical because I don’t think their paper can truly help other teams measure their bits of string (or my piece).
I agree their method might be useful, but it seems pretty obvious and I don’t understand why they wouldn’t apply it to a scientific theory themselves. Currently they have a model looking for a theory to test, but without any indication that their model is useful for any particular hypothesis.
So many climate papers are just models with no link to any tangible hypothesis/predictions. This means they are unfalsifiable, and so unscientific in Popper’s epistemology. I would even argue they are unscientific in terms of Lakatos’ Research Programmes, because they are likely to be used to support auxiliary climate hypotheses in degenerating programmes (if they were part of a progressive programme they would have explicit theory/predictions).
> I had a quick look through the first half dozen citations on the paper, and it is a lot of stuff like this: “The effects of modern climate change are occurring more quickly in grasslands relative to many other ecosystems (Loarie et al. 2009;” This quote doesn’t really use any of the data/methods of the original paper, it just uses the paper to support the fairly obvious idea that climate change has variable effects that differ by environment.
I don't see a reference to Loarie et al. 2009, and I can't find that quote in the article.
> Another: “On the other hand, the climate is shifting rapidly (Loarie et al. 2009)” - again nothing in the paper is used, and they claim the paper says change is rapid, but to determine what is “rapid” we would need to compare our velocity to a value we think is “not rapid”. That would give us a testable hypothesis (e.g. “2% of regions have a velocity faster than the fastest-moving population in the region”).
I was referring to citations of the paper (in theory, if these researchers are citing the paper, they should be using its ideas, model or data).
It has 2172(!) citations, and in many (most?) cases these citations are pointless. It’s essentially used to pad out other ‘water is wet’ papers and/or inflate hack researchers’ citation indexes.
If it were a scientific article, the citations would reference it to use/improve/critique its hypothesis/model/data. I’m sure there are a couple of these among the 2172 citations, but good luck separating the wheat from the chaff!
I think this stuff annoys me because of the huge opportunity cost. Climate change is such a big problem with so many critical questions unanswered, yet tomes of research get funded that tell us nothing about how to solve the problem. And worse: we get such volumes of drivel that serious researchers have to wade through heaps of nonsense to find quality papers.
I decided to have a deeper look at this during my lunchbreak. I found a bit of a smoking gun: they claim that "probability distribution function of our temperature-based velocities are consistent with those described previously when uncertainty is accounted for (Supplementary Table 2)" If you look at Supplementary Table 2, it looks like their estimates are in line with existing studies (Malcolm 2002).
If you actually look at the source of this comparison table (Table 2, Malcolm 2002), you will find that their estimates do not align well. E.g. for 0-315 m/year they estimate a mean of 37.6% with a range of 15-90% (lower/upper). The equivalent measure in Malcolm 2002 is 71.1%, but with an uncertainty of only +/-1.23%! So their model is "consistent" with existing models only because 71.1% falls within their huge 15-90% range. Figure S23 looks even worse.
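To make this concrete, here’s a minimal sketch (my own illustrative Python, using the numbers as I read them from the two tables - not code from either paper) of why overlap-based “consistency” is such a weak claim: a tight estimate will “agree” with almost any sufficiently wide interval.

```python
# Illustrative check: does Malcolm 2002's tight estimate (71.1% +/- 1.23%)
# fall inside Loarie et al.'s wide range (15-90%) for the 0-315 m/yr bin?

def intervals_overlap(a_lo, a_hi, b_lo, b_hi):
    """True if the closed intervals [a_lo, a_hi] and [b_lo, b_hi] share any point."""
    return a_lo <= b_hi and b_lo <= a_hi

loarie_lo, loarie_hi = 15.0, 90.0            # Loarie et al.: range 15-90%
malcolm_lo, malcolm_hi = 71.1 - 1.23, 71.1 + 1.23  # Malcolm 2002: 71.1% +/- 1.23%

consistent = intervals_overlap(loarie_lo, loarie_hi, malcolm_lo, malcolm_hi)
width_ratio = (loarie_hi - loarie_lo) / (malcolm_hi - malcolm_lo)

print(consistent)   # True - the intervals overlap, hence "consistent"
print(width_ratio)  # the Loarie interval is roughly 30x wider
```

With an interval that much wider, almost any published estimate would have come out “consistent”, which is exactly why the comparison tells us so little.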
So essentially they have proposed a model which doesn't really match existing literature and can't really be easily compared or used by others. They explain the differences away with assumptions but don't explain why their assumptions are better/worse. They might have produced a worse model but we don't have any data to tell!
But maybe I'm completely wrong and this paper is super-useful to those in this field. I'd still have to see at least 1 paper actually use their model to be convinced.
https://www.nature.com/articles/nature08649