If I remember correctly, AlphaGo combined reinforcement learning with a search over the space of moves and possible outcomes to determine the next move. So it does indeed use both search and learning.
Regardless, the author's point is that computation is a better way of finding and exploiting patterns/strategies than our own intuitions. The distinction between search and learning is not the important one here.
There was an important step prior to AlphaGo. At the time the combinatorics were in Go's favor, but someone had the bright idea to do a probabilistic search of the space. The key idea was to play a ton of random games and rate each position by the percentage of winning games it appeared in. This blew away all other Go AI at the time. Sadly this was about when I stopped having time to follow the space, so I'm not sure how the idea was further incorporated into Go AI, but it was truly a revolutionary idea at the time.
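The core of that idea (pure Monte Carlo evaluation: rate each candidate move by the fraction of random playouts from it that end in a win) can be sketched in a few lines. Full Go rules are long, so this toy uses tic-tac-toe as a stand-in; the board representation, function names, and playout count are my own illustrative choices, not from any particular Go engine.

```python
import random

# Winning lines on a 3x3 board (indices 0-8, row-major).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play uniformly random moves to the end; return the winner (or None for a draw)."""
    board = board[:]
    while True:
        w = winner(board)
        if w or all(board):
            return w
        move = random.choice([i for i, cell in enumerate(board) if not cell])
        board[move] = player
        player = 'O' if player == 'X' else 'X'

def monte_carlo_move(board, player, playouts=200):
    """Pick the move whose random playouts have the highest win rate for `player`."""
    opponent = 'O' if player == 'X' else 'X'
    best_move, best_rate = None, -1.0
    for move in [i for i, cell in enumerate(board) if not cell]:
        trial = board[:]
        trial[move] = player
        wins = sum(random_playout(trial, opponent) == player
                   for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = move, wins / playouts
    return best_move

# X can win immediately at index 2; the playout statistics find it,
# since that move wins 100% of its playouts.
board = ['X', 'X', None,
         'O', 'O', None,
         None, None, None]
print(monte_carlo_move(board, 'X'))  # → 2 (completes the top row)
```

The same statistics-over-random-games idea, refined with tree search and smarter playout policies, became Monte Carlo Tree Search, which is the search component AlphaGo later paired with learned networks.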
In hindsight, computation wasn't the important thing, though. A lot of things require a lot of computation yet aren't intelligent or don't scale well; Deep Blue is a good example. The important breakthrough in AI was learning ("machine learning").
Search/reasoning/inference-time compute, however you phrase it, is still essential. You need search on top of learning to handle novel situations.