@bareheiny - I think ultimately you're the guy I'll need to lean on to find
the optimal solution for this problem.
On Sat, Dec 7, 2019 at 9:54 AM Darryl L. Pierce <mcpierce@xxxxxxxxx> wrote:
So I'm working on re-enabling the multi-comic scraping today (the code's
nearly complete) and I'm thinking about comic selection. Specifically, the
"Select All" button no longer really does that: since we don't have the
whole library in memory, there's no way to select "all". You can, at best,
select only 100 comics, depending on the number of comics you display per
page.
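To make the limitation concrete, here's a toy sketch (names and page size are hypothetical, not the app's real code): the client holds only the current page of ids, so "Select All" can only ever touch what's in memory.

```python
# Toy illustration: the client fetches one page at a time, so a
# client-side "Select All" can only mark the ids it currently holds.
PAGE_SIZE = 100
library_ids = list(range(1, 501))       # pretend the server has 500 comics

def fetch_page(page: int) -> list:
    """Return just one page of comic ids, as a paginated backend would."""
    start = page * PAGE_SIZE
    return library_ids[start:start + PAGE_SIZE]

current_page = fetch_page(0)            # only 100 ids in memory
selected = set(current_page)            # "Select All" hits just these
assert len(selected) == PAGE_SIZE       # 100 selected, not the full 500
```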
This has me thinking that neither the current model (downloading only a
page at a time) nor the previous model (downloading the full library) is
the right answer. So I'm looking for ideas or suggestions for how we can
do this.
My first idea is re-enabling the constant background update, but limiting
it to only returning high-level details for the comics (id, publisher,
series, issue #, characters, teams, locations, stories); effectively,
that's everything that makes up the collections. I think that would go
much faster than the previous downloading of the full set of details for
each comic.
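The high-level record described above might look something like this (a minimal sketch; the class and field names are illustrative, not the project's actual model):

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical shape of a "collections-only" comic record: just the
# fields listed in the email, with no page details attached.
@dataclass
class ComicSummary:
    id: int
    publisher: str
    series: str
    issue_number: str
    characters: list = field(default_factory=list)
    teams: list = field(default_factory=list)
    locations: list = field(default_factory=list)
    stories: list = field(default_factory=list)

summary = ComicSummary(1, "Marvel", "Example Series", "1")
payload = json.dumps(asdict(summary))   # small, quick-to-marshal response
```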
On the backend, though, it still takes the same amount of work to fetch
the data, but it should be much faster to marshal it for the response
since it's a small number of fields (and no page details).
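A rough way to see the marshalling win (illustrative only; the field names and page count are made up, not taken from the real data model): compare the serialized size of a summary record against the same record carrying per-page details.

```python
import json

# A summary record vs. the same record with hypothetical page details.
summary = {"id": 1, "publisher": "P", "series": "S", "issue_number": "1",
           "characters": [], "teams": [], "locations": [], "stories": []}
full = dict(summary,
            pages=[{"filename": f"page-{n:03}.jpg",
                    "hash": "0" * 32,
                    "width": 1920, "height": 2951}
                   for n in range(24)])    # a typical-ish 24-page issue

slim = len(json.dumps(summary))
fat = len(json.dumps(full))
assert slim < fat   # the summary payload is a fraction of the full one
```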
Any thoughts, ideas or suggestions?
Darryl L. Pierce <mcpierce@xxxxxxxxx>
"Le centre du monde est partout." - Blaise Pascal
"Let's try and find some point of transcendence and leap together." - Gord