They go into it in the actual paper, if you're interested. They wanted a document store because their data is flexible (all different sizes and shapes), complex, and they needed secondary indexes. They didn't need transactions and MongoDB was very fast.
They don't give many details, but it sounds like a fairly superficial need that could be accommodated by Redis, Memcached, AppFabric, and so on. Kind of surprised they chose MongoDB, as it's far from the fastest competitor.
Perhaps, but the description essentially says there's a custom Python layer that aggregates the data and punts it into MongoDB, via some SQL extraction pattern, simply to store it in that silo.
e.g., if I run a query for NAME LIKE 'BLAH%', it stores that result set. If I run NAME LIKE 'BLAP%', it stores that result set. If someone else comes along and runs either of them, only an exact match will pull up my prior results.
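The caching behavior described above could be sketched roughly like this (all names here are hypothetical, not from the actual system): the literal query string is the cache key, so NAME LIKE 'BLAH%' and NAME LIKE 'BLAP%' are stored as separate entries, and only a byte-for-byte identical query reuses a prior result set.

```python
cache = {}
calls = []  # track which queries actually reach the backend


def backend(query):
    """Stand-in for the real SQL backend."""
    calls.append(query)
    data = ["BLAH-1", "BLAH-2", "BLAP-9"]
    prefix = query.split("'")[1].rstrip("%")  # crude LIKE-prefix extraction
    return [row for row in data if row.startswith(prefix)]


def run(query):
    # Anything but an exact string match misses the cache.
    if query not in cache:
        cache[query] = backend(query)
    return cache[query]


run("NAME LIKE 'BLAH%'")  # miss: hits the backend, stores the result set
run("NAME LIKE 'BLAP%'")  # miss: different string, stored separately
run("NAME LIKE 'BLAH%'")  # hit: exact match reuses the stored results
```

So two queries that overlap heavily in their results still get cached independently, which matches the "only a direct match" behavior.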
They needed to be able to run queries against various secondary indexes, so key/value stores wouldn't work. Also, MongoDB tends to be only about 10% slower than memcached, so I'm not sure why you think it's not fast...
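To illustrate the point about key/value stores (a toy sketch, nothing to do with MongoDB's actual internals): a plain key/value store only answers exact-key lookups, whereas a secondary index keeps values sorted, turning a prefix query like NAME LIKE 'BLAH%' into a cheap range scan.

```python
import bisect

records = {"BLAP-9": "r3", "BLAH-1": "r1", "BLAH-2": "r2"}

# A sorted key list standing in for a B-tree secondary index.
index = sorted(records)


def prefix_query(prefix):
    """Return all keys starting with `prefix` via a range scan on the index."""
    lo = bisect.bisect_left(index, prefix)
    hi = bisect.bisect_left(index, prefix + "\uffff")  # upper bound of range
    return index[lo:hi]
```

With only the dict you'd have to scan every key to answer a prefix query; the sorted index answers it in logarithmic time, which is the capability a bare key/value store lacks.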
Either way, it's nice to see that computation in science isn't stuck in the '90s, which is how it sometimes feels.