CBZ and PDF indeed cache bitmaps of previous pages in RAM, up to some limit.

I start at 20M, and it gets slower when I reach 120M. It increases a bit when switching documents (but also sometimes decreases, so I guess it's not really the engines leaking). It increases a lot on dict/Wikipedia lookups (a dict lookup often triggers MuPDF to render HTML dict results). It also gets slower when switching to FM, or when closing History (when opened from the reader): we have a few calls to collectgarbage() on these occasions, and they take more time when RAM is at 120M than when it is at 20M. I tried removing them, but then I quite quickly reach 250M (while being quite a bit more responsive - but then you get spurious random micro slowdowns whenever the auto GC kicks in) - and eventually some crash. Keeping them keeps me longer under 200M, with indeed some slowness.

So, I'm inclined to think it's Lua/LuaJIT memory that gets fragmented and badly re-used - so it keeps increasing whenever more is needed for a new dict lookup, or when a new book is opened and the engine initialises.

What I'd really like, but found no way to have it ensured, is: allocate 100M for Lua stuff, and keep it working in that area. Then, have the engines (C code) allocate their stuff outside that, so these 100M don't get fragmented/filled by the larger blocks of memory needed by the C engines. The other option would be to not load the C libraries in the main process, and have them run in a buddy process with interprocess communication :)

TLDR: yes, keep restarting KOReader from time to time when you see it reach some 100-150M mem usage (that's what I do.)

Right now, after running for a few days, my KOReader says it's using 46 MB.

A simple dict lookup involving an HTML dict - and so bringing in MuPDF (although it might already be brought in for the InfoMessage icon in the "Opening." initial popup) - and I'm already at 60M! That's with the first dict result coming from the Robert2007 dict, which has some fancy HTML. Disabling it, doing the same workflow with the first result coming from the HTML XMLLittre, and I'm only at 25M. Disabling XMLLittre too, with the first result coming from the non-HTML Académie française dict, and I'm at 24M. I'm going to see how I go for some time with these HTML dicts disabled.

Just switching between 2 EPUBs (with a gesture, to not bring in TouchMenu code) also makes RAM usage grow a bit.
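To make numbers like these easier to reproduce, here is a minimal Lua sketch (not KOReader's actual code; the helper names are made up for illustration) that compares the Lua/LuaJIT heap reported by the standard collectgarbage("count") API with the whole-process RSS read from /proc/self/status on Linux, before and after a forced collection:

```lua
-- Minimal sketch, not KOReader code: compare the Lua/LuaJIT heap with the
-- whole-process resident memory, before and after a forced collection.

-- Lua-side heap in KB, as reported by the collector (standard Lua API).
local function lua_heap_kb()
    return collectgarbage("count")
end

-- Whole-process RSS in KB, read from /proc/self/status (Linux only).
local function process_rss_kb()
    local f = io.open("/proc/self/status", "r")
    if not f then return nil end
    local rss
    for line in f:lines() do
        rss = line:match("^VmRSS:%s*(%d+)")
        if rss then break end
    end
    f:close()
    return tonumber(rss)
end

local lua_before, rss_before = lua_heap_kb(), process_rss_kb()
collectgarbage("collect")
collectgarbage("collect")  -- a second pass catches objects released by finalizers
print(("Lua heap:    %.0f -> %.0f KB"):format(lua_before, lua_heap_kb()))
print(("Process RSS: %s -> %s KB"):format(tostring(rss_before), tostring(process_rss_kb())))
```

If the Lua heap drops back to a few MB after the collection but the process RSS stays high, the growth is on the C side (or in freed-but-fragmented pages the allocator can't return to the OS) - which is what the "keep Lua in its own 100M area" and buddy-process ideas above are trying to address.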