Hmmm...the latest pre-release does not like my DB.
I’ll re-create it unless there’s something broken?
On 9/12/2019, at 13:30, bareheiny (Redacted sender bareheiny for DMARC)
<dmarc-noreply@xxxxxxxxxxxxx> wrote:
Magic – thanks for the explanation 😊
From: João Miguel Côrte-Real França Pereira
Sent: Monday, 9 December 2019 1:20 PM
To: comixed-dev@xxxxxxxxxxxxx
Subject: [comixed-dev] Re: The issue of fetching the library
When a developer changes something, they create a Pull Request. They will show
up here: https://github.com/comixed/comixed/pulls (it's empty now because all
of them are closed). The other developers can then review it, and if everything
is OK, they merge and close the PR.
Then we have an action (https://github.com/comixed/comixed/actions) that,
after the merge is done, automatically triggers a build and publishes it here:
https://github.com/comixed/comixed/releases
I've just merged the PR with the caching change from Darryl, so in less than
20 mins you should have a new build with that change to test.
On Sun, Dec 8, 2019, 22:56 bareheiny <dmarc-noreply@xxxxxxxxxxxxx> wrote:
I *thought* the time between each GET of 100 comics was longer...but I may
have been imagining that.
Apologies for being obtuse, but how would I get a copy of CX with the cover
caching? I had assumed the pre-release I had would include it...but I didn’t
see any evidence of caching.
From: Darryl L. Pierce
Sent: Monday, 9 December 2019 11:05 AM
To: comixed-dev@xxxxxxxxxxxxx
Subject: [comixed-dev] Re: The issue of fetching the library
The latest build doesn't, no, since the PR's not been approved yet.
When you say it seems slower, at which point do you mean? It might be a bit
slow at first due to putting images into the cache directory: there's the
small performance hit of writing a bunch of files, but it should hopefully be
way faster when re-loading those images.
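
For illustration, the pattern described above is roughly a write-once,
read-many file cache. Here's a minimal Java sketch; the class, method, and
interface names are made up for illustration, not ComiXed's actual code:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CoverCache {
    private final Path cacheRoot;

    public CoverCache(Path cacheRoot) {
        this.cacheRoot = cacheRoot;
    }

    // First access pays the cost of extracting and writing the image;
    // every later access reads straight from the cache file.
    public byte[] getCover(String hash, CoverLoader loader) throws IOException {
        Path cached = cacheRoot.resolve(hash);
        if (Files.exists(cached)) {
            return Files.readAllBytes(cached); // fast path on re-load
        }
        byte[] image = loader.load(hash);      // slow path: pull from the archive
        Files.createDirectories(cached.getParent());
        Files.write(cached, image);            // one-time write cost
        return image;
    }

    public interface CoverLoader {
        byte[] load(String hash) throws IOException;
    }
}

That one-time write is the "bit slow at first" being described; subsequent
library loads hit only the fast path.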
On Sun, Dec 8, 2019 at 4:24 PM bareheiny Alexander
<dmarc-noreply@xxxxxxxxxxxxx> wrote:
Does the latest build include the image caching?
And where would the cached images be stored?
On 9/12/2019, at 08:45, bareheiny Alexander (Redacted sender bareheiny for
DMARC) <dmarc-noreply@xxxxxxxxxxxxx> wrote:
Got the latest pre-release build - it seems a bit slower on the initial
library load - so I’m assuming something new is happening.
On 9/12/2019, at 01:03, Darryl L. Pierce <mcpierce@xxxxxxxxx> wrote:
You can try building it yourself. It's a PR so I'd like to get some feedback
before merging it.
On Sat, Dec 7, 2019 at 5:54 PM bareheiny <dmarc-noreply@xxxxxxxxxxxxx> wrote:
Noice.
So I just wait for a new release, or is it better to clone and compile myself?
From: Darryl L. Pierce
Sent: Sunday, 8 December 2019 11:41 AM
To: comixed-dev@xxxxxxxxxxxxx
Subject: [comixed-dev] Re: The issue of fetching the library
Yeah, you can do that. But I just put together a migration so you won't have
to. :D
On Sat, Dec 7, 2019 at 5:27 PM bareheiny Alexander
<dmarc-noreply@xxxxxxxxxxxxx> wrote:
That should be easy for me to check, yes? I can just query the page hash
table for the length of the values.
I really don’t mind re-creating my library...I need to delete a few comics
that have been replaced by trades anyway.
On 8/12/2019, at 11:11, Darryl L. Pierce <mcpierce@xxxxxxxxx> wrote:
I just created the PR for the caching ticket (#27). But, unfortunately, I
found that there was an issue with the pages table; i.e., the hashes were
being stored without padding out the hash value to 32 characters. The caching
code needs the hash to be exactly 32 characters long so it can build out the
caching directory. The directory structure takes the hash and creates three
directories: one for the first 8 characters, one for the next 8, and the
third for the next 8. Then the cache filename is the last 8 characters. So
that's going to break if your pages table contains hashes with fewer than 32
characters.
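
To illustrate the layout described above, here is a hedged Java sketch (the
class and method names are hypothetical, not the actual ComiXed code):

import java.nio.file.Path;
import java.nio.file.Paths;

public class CacheLayout {
    // Split a 32-character hash into three 8-character directory levels
    // plus an 8-character cache filename, as described above.
    public static Path pathFor(String hash) {
        if (hash.length() != 32) {
            // This is the failure mode for unpadded hashes.
            throw new IllegalArgumentException("hash must be 32 characters: " + hash);
        }
        return Paths.get(
            hash.substring(0, 8),    // first directory level
            hash.substring(8, 16),   // second directory level
            hash.substring(16, 24),  // third directory level
            hash.substring(24, 32)); // cache filename
    }
}

For example, "0123456789abcdef0123456789abcdef" would map to
01234567/89abcdef/01234567/89abcdef.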
I can decline the PR I pushed and add a migration to pad out any existing
records so you won't have to delete anything, then resubmit the PR. I'd
rather do that than have you need to delete your data.
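
As a sketch of what that padding amounts to (the actual fix would be a
database migration; this helper, and the assumption that the hashes are hex
strings that lost their leading zeros, are illustrations only):

public class HashPadding {
    // Left-pad a short hash back to 32 characters by restoring
    // leading zeros. The class and method names are hypothetical.
    public static String padHash(String hash) {
        StringBuilder padded = new StringBuilder(hash);
        while (padded.length() < 32) {
            padded.insert(0, '0');
        }
        return padded.toString();
    }
}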
On Sat, Dec 7, 2019 at 4:23 PM bareheiny <dmarc-noreply@xxxxxxxxxxxxx> wrote:
My library is now loaded (hopefully for the last time)...no metadata
embedded, but CX has picked up the series etc. from the file name – where the
name makes sense.
I’ll start having a look at load times again soon – as well as doing some
scraping, as I expect the more metadata that’s available, the slower the load
will be.
I’m likely repeating myself...but I really do think caching the covers will
significantly decrease the library page load times.
Which just leaves the metadata being loaded into the donut chart. I need to
figure out a way to measure that properly though...”I think it’s slow”
doesn’t really cut it >_<
From: Darryl L. Pierce
Sent: Sunday, 8 December 2019 4:02 AM
To: comixed-dev@xxxxxxxxxxxxx
Subject: [comixed-dev] Re: The issue of fetching the library
@bareheiny - I think ultimately you're the guy I'll need to lean on to find
the optimal solution for this problem.
On Sat, Dec 7, 2019 at 9:54 AM Darryl L. Pierce <mcpierce@xxxxxxxxx> wrote:
So I'm working on re-enabling the multi-comic scraping today (the code's
nearly complete) and I'm thinking about comic selection. Specifically, the
"Select All" button is no longer really doing that; i.e., since we don't
have the whole library in memory, there's no way to select "all". You can
only select, at best, 100 comics, depending on the number of comics you
display per page.
This has me thinking that neither the current model (downloading only a page
at a time) nor the previous model (downloading the full library) is the right
answer. So I'm looking for ideas or suggestions for how we can do this better.
My first idea is re-enabling the constant background update, but limiting it
to returning only high-level details for the comics (id, publisher, series,
issue #, characters, teams, locations, stories); effectively, that's
everything that makes up the collections. I think that would go much faster
than the previous downloading of the full set of details for each comic.
On the backend, though, it still takes the same amount of work to fetch the
data, but it should be much faster to marshal the response since it's a small
number of fields (and no page details).
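
As a rough sketch of the kind of payload this implies (the class and field
names below are assumptions for illustration, not ComiXed's actual model):

import java.util.List;

// Slimmed-down, per-comic summary: just enough to build the collections
// views, with no page-level details.
public class ComicSummary {
    public long id;
    public String publisher;
    public String series;
    public String issueNumber;
    public List<String> characters;
    public List<String> teams;
    public List<String> locations;
    public List<String> stories;
}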
Any thoughts, ideas or suggestions?
--
Darryl L. Pierce <mcpierce@xxxxxxxxx>
"Le centre du monde est partout." - Blaise Pascal
"Let's try and find some point of transcendence and leap together." - Gord
Downie