I understand that this is due to MBH-34, but I'm creating it under the server for tracking purposes.
I had a stab at doing this without an API, but it requires fetching each transcluded page individually, which is terribly slow to do in the page itself. So either that needs to happen from cron, or the results need to be cached (I think the old server did some caching here).
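For reference, a minimal sketch of the cron + cache idea: a scheduled script does the slow per-page fetches and writes the results to a local cache file, so the page itself only reads one file. The wiki URL, page titles, and cache path below are all hypothetical placeholders, not the actual setup:

```python
#!/usr/bin/env python3
"""Hypothetical cron job: fetch each transcluded page and cache the
results locally. All names and URLs are placeholders."""

import json
import urllib.request

WIKI_BASE = "https://example.org/wiki/"      # placeholder wiki URL
PAGES = ["Page_A", "Page_B", "Page_C"]       # placeholder transcluded pages
CACHE_FILE = "/tmp/transclusion-cache.json"  # placeholder cache location

def fetch_page(title):
    # One HTTP request per page -- this is the slow part we move to cron.
    with urllib.request.urlopen(WIKI_BASE + title) as resp:
        return resp.read().decode("utf-8")

def main():
    cache = {title: fetch_page(title) for title in PAGES}
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)

if __name__ == "__main__":
    main()
```

The page then reads the cache file instead of making one request per transcluded page, and cron keeps it reasonably fresh.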
I'm going to put that work on hold again for a bit; I have Internal Server Errors to fix.
This really would be much better if it could be done through the MediaWiki API.
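For what it's worth, the big win with the API would be batching: a single query with prop=revisions accepts multiple titles (up to 50 per request for normal users), so one round trip replaces dozens of individual page fetches. A rough sketch, with a placeholder endpoint and page titles:

```python
import requests

API_URL = "https://example.org/w/api.php"  # placeholder; real endpoint TBD
titles = ["Page_A", "Page_B", "Page_C"]    # placeholder transcluded pages

# One batched query: prop=revisions with multiple titles returns the
# latest revision timestamp for each page in a single round trip.
resp = requests.get(API_URL, params={
    "action": "query",
    "prop": "revisions",
    "rvprop": "timestamp",
    "titles": "|".join(titles),  # up to 50 titles per request
    "format": "json",
}).json()

for page in resp["query"]["pages"].values():
    # Pages that exist carry a 'revisions' list; missing ones don't.
    ts = page.get("revisions", [{}])[0].get("timestamp", "missing")
    print(page["title"], ts)
```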
Ahh, perhaps the individual requests were what it was doing before, on Classic? It was indeed very slow, but I found it satisfactory. You load the page once and then just go through checking all the pages without reloading, using a separate tab (set not to retrieve the latest versions) to update the entries if need be. It used to have a separate button to ask it to retrieve the latest versions, so most of the time the page load was fast.
Given Dave's comments on MBH-34, I fear we may have no choice but to do the individual page requests, even though it makes the page extremely slow to load. :/