Hello Graeme,

On 27-Jan-2011, Graeme Gill wrote:

> collink is a bit simpler than colprof. It only creates one grid rather
> than 3, and only has to cover the real device gamut, rather than the
> L*a*b* cube. The other factor to take into account is the inversion
> caching. The effectiveness of the caching depends on the ratio of
> sampling density of output table to input. So the smaller the resolution
> of the A2B of the output profile, and the higher the resolution of
> the device link, the more effective the cache gets. The more memory
> you have, the bigger the cache and the more effective it gets. So
> the scaling may not be linear, and will depend a lot on the quality
> flag you set when creating the profile, the machine you are using, etc.
>
> Here's some benchmarks for a different profile using collink:
>
> res.  Time
> 17    5 mins
> 33    25 mins
> 100   25 Hours (estimated)
> 255   > 1 Year (estimated. It hadn't reached 0.01% after an hour)

Anyway, I can't explain the speed I experienced with collink -r100,
while on the contrary I could never generate, even just for testing, a
profile with colprof -qu (still computing after one day), and -qh alone
takes an hour or more. On the machine used for profiling I have 2 GB of
RAM and a not-too-recent Pentium 4 dual core @ 3.2 GHz, with
ARGYLL_REV_ACC_GRID_RES_MUL = 1.5 and ARGYLL_REV_CACHE_MULT = 1.6.

Well, I'm certainly happy about that, since device links for RGB->CMYK
conversions will be my definitive choice :)

> Possibly. It would be interesting to identify three points: two either
> side of a discontinuity, and one right inside it, and then explore them
> using xicclu -fif. Is the CMYK value returned in fact the only one
> possible that matches the Lab value and has the closest K to the target,
> or is some other combination with a more compatible K being ignored?
> (The situation has
> arisen in the past where the possible K levels become bifurcated, but the
> inversion algorithm has not picked the one with K closest to the target
> level, due to numerical errors.)

That's more or less what I'm struggling with. I should investigate,
preferably in some graphic fashion, the topology of the discontinuous
areas (i.e. groups of offending grid points). The question is whether
the discontinuities tend to extend in all three dimensions (so having
the topology of a ball) or in just one or two dimensions (for example
the topology of a flyover). The first would be the easiest case, because
fixing the offending nodes (choosing alternate ones from the
bifurcations) completely fixes the discontinuity. But the second case
would be troublesome, since fixing the discontinuity in one direction
could introduce a discontinuity in another, orthogonal direction! I hope
this will never be the case but, again, I have to investigate.

In any case, I think a quick and efficient way to detect that you are
generating an "offending node" during the table inversion [quick because
you don't have to check for curve continuity and derivatives] is simply
to try, on the fly, a linear interpolation at 50% between the new node
and the previously generated neighbouring nodes, and to check the Lab
value of this interpolated CMYK combination using the A2B table. If this
Lab value doesn't lie between the Lab values of the surrounding nodes
(i.e. interpolation would show artifacts here), you know you have
started generating an offending node or sequence of nodes - you know
you're on the boundary of a problematic zone. This is a check I would
really do, since it tells you a lot. Of course, if the boundary is not a
closed shape, there's no way to tell which nodes are the right ones and
which are the "wrong" ones!

> Memory consumption may still be an issue.
>
> The limit is actually 255. I can change this for collink easily enough.
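As a rough sketch of that 50% interpolation check (a toy of my own, not ArgyllCMS code; `a2b` here is a hypothetical stand-in for the profile's forward CMYK->Lab lookup, and Lab distance is treated as plain Euclidean):

```python
def midpoint_check(cmyk_new, cmyk_neigh, a2b, tol=2.0):
    """Return True if linearly interpolating halfway between the new node
    and a neighbouring node lands, through the A2B table, close to the
    Lab midpoint of the two nodes (i.e. interpolation is safe there)."""
    # CMYK halfway between the two nodes
    cmyk_mid = [(a + b) / 2.0 for a, b in zip(cmyk_new, cmyk_neigh)]
    # Lab the device would actually produce at that midpoint
    lab_actual = a2b(cmyk_mid)
    # Lab the interpolation assumes: midpoint of the nodes' Lab values
    lab_new, lab_neigh = a2b(cmyk_new), a2b(cmyk_neigh)
    lab_expected = [(a + b) / 2.0 for a, b in zip(lab_new, lab_neigh)]
    err = sum((x - y) ** 2 for x, y in zip(lab_actual, lab_expected)) ** 0.5
    return err <= tol
```

With a perfectly linear a2b the check always passes; it fails as soon as the interpolated point lands further than the tolerance from the Lab midpoint of the two nodes, which is exactly the "offending node" symptom described above.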
I couldn't find such a statement in the ICC docs; I noticed the grid
resolution is coded in a 16-bit number, so there shouldn't be that
limit, but maybe I overlooked something. In any case, 255 would still
create mismatches with 8-bits-per-channel unsigned data; it should be
256.
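To illustrate the alignment point (my own toy example, not taken from the ICC spec or from Argyll): an N-node grid spanning the 8-bit code range 0..255 puts node i at i*255/(N-1), and only N = 256 lands every node exactly on an integer 8-bit code.

```python
def node_positions(n):
    """8-bit code values of the nodes of an n-point grid over 0..255."""
    return [i * 255.0 / (n - 1) for i in range(n)]

def all_on_integer_codes(n):
    """True if every grid node coincides exactly with an 8-bit code."""
    return all(p == int(p) for p in node_positions(n))

print(all_on_integer_codes(256))  # True: node spacing is exactly 1 code
print(all_on_integer_codes(255))  # False: spacing is 255/254 codes
```

With 255 nodes only the two endpoints line up with integer codes, so every interior node falls between 8-bit values; with 256 nodes the grid and the 8-bit code space coincide one-to-one.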