Using Warrick and a Google crawler: http://ce3c.be/raza/
* pantheraproject.tgz (warrick, indexed files)
* rzcache.sql.gz (crawler, sql db)
Around 3000 wiki pages were grabbed in total, probably with some duplicates;
it needlessly fetched both the http and https version of each page.
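Since both the http and https versions were fetched, a quick pass could fold those pairs before scraping; a minimal sketch, assuming the grabbed page URLs are available as a list (the example URLs and function names below are hypothetical):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url: str) -> str:
    # Collapse http/https to one scheme so scheme-only duplicates match.
    parts = urlsplit(url)
    return urlunsplit(("https",) + tuple(parts[1:]))

def dedupe(urls):
    # Keep the first occurrence of each page, ignoring the scheme.
    seen = set()
    unique = []
    for url in urls:
        key = normalize(url)
        if key not in seen:
            seen.add(key)
            unique.append(url)
    return unique

urls = [
    "http://example.org/wiki/Main_Page",   # hypothetical sample URLs
    "https://example.org/wiki/Main_Page",  # same page, other scheme
    "https://example.org/wiki/Help",
]
print(dedupe(urls))
```

This trims the scheme doubles but not near-duplicates from redirects or query-string variants, which would need their own normalization.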
Time to scrape content?