I downloaded the EPUB file from that site, converted it to text with Calibre, counted the word frequencies with a word-frequency counter program (Word Frequency Counter), and pasted the result into Excel (list 1: http://pasted.co/0ba8c393).
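If you'd rather script the counting step than use a separate program, roughly the same thing can be done in a few lines of Python. This is just a sketch: it lowercases the text and treats every run of word characters as a word, which may tokenize slightly differently than the counter program does.

```python
import re
from collections import Counter

def word_frequencies(text):
    # Lowercase, then split on non-word characters (same idea as \W)
    words = re.findall(r"\w+", text.lower())
    return Counter(words)

# Tiny example in place of the converted book text
freqs = word_frequencies("The cat sat on the mat. The cat slept.")
print(freqs.most_common(3))
```

Run it on the full text Calibre produced and you get the equivalent of list 1 without the Excel paste.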
I pasted the text of the book into Notepad++ and replaced all non-word characters with newlines (search for \W, replace with \n, with the search mode set to “Regular expression”). I pasted the result into Excel, deleted the duplicates (list 2), then looked up each word’s frequency in list 1 with =VLOOKUP. Result: http://pasted.co/a9eafc6a This was quick and dirty, just to show what’s possible. You could also associate each word with the paragraph or line it appears in, and generate cloze deletions from those.
To make this better, it would help to have a list of conjugated forms mapped to their infinitives, to get a truer picture of each base word’s frequency, and perhaps a study of word frequencies across different corpora, so the most common and least common words can be excluded.
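Given such a list, folding the conjugations into their base words is straightforward. The mapping below is a made-up toy example (a real one would come from a morphological dictionary for the language in question); `lemma_map` and `base_word_frequencies` are names I'm inventing for the sketch.

```python
from collections import Counter

# Hypothetical conjugation-to-infinitive mapping; a real list would be
# much larger and come from a morphological dictionary
lemma_map = {
    "goes": "go", "went": "go", "gone": "go",
    "cats": "cat",
}

def base_word_frequencies(word_freqs):
    # Fold each form's count into its base word's count;
    # words not in the mapping count as their own base word
    base = Counter()
    for word, count in word_freqs.items():
        base[lemma_map.get(word, word)] += count
    return base

freqs = Counter({"went": 2, "goes": 1, "cat": 1, "cats": 3})
print(base_word_frequencies(freqs))  # "go" and "cat" absorb their forms
```

The corpus-comparison step would then run on these base-word counts rather than on the raw surface forms.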