
Bit depth and color space conversion woes

Stefano Bonicatti
KDE Developer
Hello there, I'm Smjert on the IRC channel. I was trying to understand why color conversion takes so much time (for instance, when changing the bit depth from Image -> Properties).
One answer is that it's a single-threaded process, and that's fine; I can probably "fix it". But what I don't understand is all the conversions that take place.

Starting from an RGB 8-bit image and changing the bit depth to 16-bit float, for instance, first triggers a conversion from RGBA U8 to RGBA U8 with the HighQuality/BlackpointCompensation conversion flags (and judging by the number of pixels, it clearly processes the whole image).
After that, a conversion from RGBA U8 to RGBA 16F takes place (the one I would expect), with BlackpointCompensation only.
Then comes a conversion from RGBA 16F back to RGBA U8, again with HighQuality/BlackpointCompensation.
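
As far as I know, Krita's ICC conversions go through LittleCMS under the hood, so each of these steps boils down to roughly the following standalone sketch (plain lcms2, not the actual Krita code path; the image size and the sRGB-to-sRGB profiles are just placeholders):

Code: Select all
// Standalone lcms2 sketch of one U8 -> 16F conversion with blackpoint
// compensation. Not the Krita code path, just an illustration of what a
// single one of the conversions above amounts to.
#include <lcms2.h>
#include <cstdint>
#include <vector>

int main()
{
    const int width = 4096, height = 4096;
    const size_t pixelCount = size_t(width) * height;

    std::vector<uint8_t>  src(pixelCount * 4);   // RGBA U8
    std::vector<uint16_t> dst(pixelCount * 4);   // RGBA 16F (half floats)

    cmsHPROFILE srgb = cmsCreate_sRGBProfile();

    // Blackpoint compensation is a transform flag, like the
    // BlackpointCompensation conversion flag mentioned above.
    cmsHTRANSFORM xform = cmsCreateTransform(
        srgb, TYPE_RGBA_8,
        srgb, TYPE_RGBA_HALF_FLT,
        INTENT_PERCEPTUAL,
        cmsFLAGS_BLACKPOINTCOMPENSATION);

    cmsDoTransform(xform, src.data(), dst.data(),
                   cmsUInt32Number(pixelCount));

    cmsDeleteTransform(xform);
    cmsCloseProfile(srgb);
    return 0;
}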

All three of these work on the entire image. On top of that, the first and the third are done by updateCache in KisOpenglImageTextures (with OpenGL active); the second one is done by some other path I don't recall.

The same happens if one does Convert Image Color Space (side note: after the conversion, the pre-selected bit depth in that dialog is still 8-bit; shouldn't it be the current one?).

So I don't get why all of these conversions happen; shouldn't one be sufficient?
At first I thought the third conversion happened because OpenGL didn't support the RGBA 16F format, but as far as I can tell it does; and I can't explain the first one either.
halla
KDE Developer
I'd need to dig in and check what's going on... The last person to really touch the color conversion code was Dmitry. Keep in mind, though, that there are two things going on here:

* the conversion of the image and layer colorspace to a new one
* the conversion of the rendered image to something the GPU can display

What I think is going on, but haven't checked, is that after pressing OK in the convert dialog, the display code starts updating the textures, then the image is converted, maybe even in background threads, and when that's done, the display code needs to convert the pixels again. The first conversion is probably superfluous.
Stefano Bonicatti
KDE Developer
Thank you for answering.

I do think, though, that something else is happening too.

What I know is that the first requested conversion (U8 to U8 with the HighQuality flag), since only the conversion flags change, doesn't call the actual conversion/transform at line 243 of KoColorSpace.cpp; the memcpy just above it is used instead. So it's probably quick, but still unnecessary.
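
In other words, the branch looks roughly like this (a simplified sketch with illustrative names, not the real KoColorSpace code):

Code: Select all
// Simplified sketch of the fast path described above: when source and
// destination color spaces are the same, the pixels are copied verbatim
// and the conversion flags are effectively ignored.
#include <cstddef>
#include <cstdint>
#include <cstring>

struct ColorSpaceStub {
    size_t pixelSize;   // bytes per pixel, e.g. 4 for RGBA U8
};

void convertPixels(const uint8_t *src, uint8_t *dst, size_t nPixels,
                   const ColorSpaceStub *srcCs, const ColorSpaceStub *dstCs)
{
    if (srcCs == dstCs) {
        // Same color space on both sides: no transform is created, the
        // pixels are just copied. That is why the U8 -> U8 "conversion"
        // is cheap, yet it still touches every pixel of the image.
        std::memcpy(dst, src, nPixels * srcCs->pixelSize);
        return;
    }
    // Otherwise a real color transform (lcms2, etc.) would run here.
}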
As for the third conversion, which should be the one made for the OpenGL format, I see no reason for it to take 16F data and convert it to U8, since the card supports float types.
According to this (and unless Qt uses something totally different in the underlying code), one can pass an RGBA 16F image, and if the video card doesn't support that specific pixel type (GL_HALF_FLOAT) it will be converted (by OpenGL) to GL_FLOAT, i.e. RGBA 32F; otherwise, if the GL_ARB_half_float_pixel extension is present (which it is for me), the conversion shouldn't happen.
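
To illustrate, uploading the 16F tiles directly would look roughly like this (a plain OpenGL sketch, not the actual Krita upload code; it assumes a current GL context and either GL >= 3.0 or the ARB_texture_float / ARB_half_float_pixel extensions):

Code: Select all
// Rough sketch of uploading RGBA 16F pixels directly, instead of
// converting them back to U8 on the CPU first.
#include <GL/gl.h>

#ifndef GL_RGBA16F
#define GL_RGBA16F    0x881A   // same value as GL_RGBA16F_ARB
#endif
#ifndef GL_HALF_FLOAT
#define GL_HALF_FLOAT 0x140B   // same value as GL_HALF_FLOAT_ARB
#endif

void uploadHalfFloatTile(GLuint tex, int width, int height,
                         const void *rgba16fPixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    // With a half-float internal format the texture stays 16F on the GPU;
    // the driver only widens it (e.g. to 32F) if 16F isn't supported,
    // which matches the behaviour described above.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_HALF_FLOAT, rgba16fPixels);
}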

Now, looking at where the third conversion is started in the Krita code, I see at line 254 of kis_opengl_image_textures.cpp that the destination color space is taken from m_tilesDestinationColorSpace, which is set (to something other than null) only inside updateTextureFormat(), at line 511 of the same file, and the last time I debugged I didn't see a breakpoint there get hit after changing the bit depth.

So, long story short: updateTextureFormat() is not called when the bit depth changes, and I think it should be, so that the OpenGL texture updates its internal format/color space and unnecessary conversions are avoided.
Stefano Bonicatti
KDE Developer
I've researched the code a bit more, but some questions remain.
updateTextureFormat() is not called, as I said, when the bit depth changes, so the OpenGL texture stays at 8-bit (or, rather, at the original bit depth of the image).
If we call updateTextureFormat() inside updateCache (the function is private), it picks up the correct bit depth after changing to 16-bit float, though a conversion still happens because the default selected profile is different.
This is actually fine, so in the end the third conversion happens anyway, but between the same color model and depth with different profiles.
The problem, though (and here unfortunately I don't know much about color conversion), is that when it converts the OpenGL texture from 16F sRGB built-in to 16F scRGB (linear), or even between the same profiles, where there should be no actual conversion at all, the image gets darker.
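
For clarity, the experiment was roughly the following (a guessed sketch: updateCache(), updateTextureFormat() and m_tilesDestinationColorSpace are the names from above, but the surrounding class and the "color space changed" check are my assumptions, not the actual kis_opengl_image_textures.cpp code):

Code: Select all
// Guessed sketch of the experiment described above, not actual Krita code.
class KoColorSpace;   // the real Krita class, only forward-declared here

class KisOpenGLImageTexturesSketch
{
public:
    void updateCache(const KoColorSpace *imageColorSpace)
    {
        // Refresh the texture format whenever the image color space (and
        // therefore its bit depth) no longer matches what the textures were
        // created for, so the fallback conversion to U8 is skipped.
        // (A pointer comparison is a simplification; a real check would
        // compare color model, depth and profile.)
        if (m_tilesDestinationColorSpace != imageColorSpace) {
            updateTextureFormat();   // private in the real class
        }
        // ... the existing texture update / conversion code would follow ...
    }

private:
    void updateTextureFormat();      // recomputes m_tilesDestinationColorSpace
    const KoColorSpace *m_tilesDestinationColorSpace = nullptr;
};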

I still think, though, that it makes sense to update the texture format when the bit depth changes.

Edit: I forgot to say that I also tried to see where I could add some speed (namely threads), but what calls the conversion is a mergeJob from a fullRefreshAsync, which does all sorts of things (not all of which I've understood), though I gather it re-applies all the layers and so on.
I do think that is a bit unnecessary, and because of all the (possible) side effects it's also difficult to parallelize.
A bit depth change (especially since the one I'm talking about changes only the rendered image) should just be a matter of taking the image data, splitting it up and feeding the chunks to threads to convert; see the sketch below.
I'll try to do something like that, but first I need to better understand how stroke/update/spontaneous/merge jobs work, so I can try to reuse the scheduler instead of spawning my own threads.
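
Just to show the shape of the idea, here is a minimal sketch with plain std::thread and a generic per-slice conversion callback (the real implementation should of course go through Krita's scheduler, as said above):

Code: Select all
// Splits a pixel buffer into one slice per hardware thread and runs the
// supplied per-slice conversion callback on each slice in parallel.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

void convertInParallel(const uint8_t *src, uint8_t *dst, size_t nPixels,
                       size_t srcPixelSize, size_t dstPixelSize,
                       const std::function<void(const uint8_t *, uint8_t *, size_t)> &convertSlice)
{
    const unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
    const size_t sliceSize = (nPixels + nThreads - 1) / nThreads;

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nThreads; ++t) {
        const size_t begin = t * sliceSize;
        if (begin >= nPixels) break;
        const size_t count = std::min(sliceSize, nPixels - begin);

        // Each worker converts an independent slice, so no locking is
        // needed as long as convertSlice itself is thread-safe.
        workers.emplace_back([=, &convertSlice] {
            convertSlice(src + begin * srcPixelSize,
                         dst + begin * dstPixelSize,
                         count);
        });
    }
    for (std::thread &w : workers) {
        w.join();
    }
}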

