Registered Member
|
Can I change the colour of only the highlights rather than the whole image (like white balance in Edius)? Or change only a certain colour value (like the chrominance and YUV curves in Edius)?
Thanks. |
Registered Member
|
The three point balance works in three brightness ranges.
The curves plugin by definition also affects only the range where you shift the curve. The RGB color adjustment has an "alpha controlled" option, where it will only affect areas with nonzero alpha. You can use color keying to select a specific color to be adjusted. |
Registered Member
|
[b]Marko[/b]
[i]The three point balance works in three brightness ranges.[/i] Yes, it does. [i]The curves plugin by definition also affects only the range where you shift the curve.[/i] If it's an RGB curve, then yes. But if it's a curve in the YUV colour space, I don't think that plugin can change only the highlights, midtones, or shadows. I'm not discussing the RGB colour space at all; I mean YUV curves. If I start to change a "magenta" value at the top point of such a curve, I change only the colour, not the luma values. I need YUV curves, not RGB. I apologise: Adobe Premiere and Sony Vegas users may use RGB, but I don't want to use the RGB colour space. Even Avisynth can work in the YUV colour space. Excuse me. |
Moderator
|
RGB and YUV both have the right to exist. I could just as well give examples why YUV is bad and RGB is good. You will never get around RGB anyway.
I currently cannot really see why you would want to have YUV curves in this case. Y is already there (the «Luma» component in the Curves dialog). U and V would change the colour of the whole image, since there is no highlights/midtones/shadows information in these two components; they carry colour only. Perhaps I got this the wrong way? Simon |
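The point about U and V can be sketched numerically. This is a hypothetical illustration (it assumes full-range BT.601 conversion coefficients, not whatever Kdenlive/MLT actually uses internally): a constant shift of one chroma component moves every pixel's colour by the same RGB amount, regardless of its luma, which is why a global U/V curve cannot target only highlights.

```python
# Hypothetical sketch, assuming full-range BT.601 coefficients
# (not Kdenlive code).
def ycbcr_to_rgb(y, cb, cr):
    """Full-range BT.601 Y'CbCr -> R'G'B' (float, unclamped)."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return (r, g, b)

# Two neutral grey pixels: one in the shadows, one in the highlights.
shadow    = ycbcr_to_rgb(40, 128, 128)
highlight = ycbcr_to_rgb(220, 128, 128)

# Shift Cr by +20 for both pixels.
shadow_shift    = ycbcr_to_rgb(40, 128, 148)
highlight_shift = ycbcr_to_rgb(220, 128, 148)

delta_shadow    = tuple(a - b for a, b in zip(shadow_shift, shadow))
delta_highlight = tuple(a - b for a, b in zip(highlight_shift, highlight))
# Both deltas are the same RGB offset (up to float rounding): the chroma
# shift affects dark and bright pixels identically.
```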
Registered Member
|
2 Granjow
A. It's a very simple thing:
1. Apply the chrominance filter.
2. Select a highlight point in that filter; the chrominance filter then acts as a mask.
3. Add YUV curves inside the chrominance filter (it can host not only YUV curves but any filter).
It's a dynamic mask for the other effect. You can also select not a highlight point but any value: red, green, midtones, etc. The mask has a soft edge, because its border can adjust itself. Try Canopus Edius; it has a 30-day trial period, if I'm not mistaken. I don't want to advertise it, just to explain.
B. Where did you find a Curves dialog in Kdenlive, I'm sorry? I found only: B, G, R, RGB parade, Technicolor, Sepia, Saturation, Chroma hold, Contrast, Brightness, Gamma: http://i4.fastpic.ru/big/2011/0131/72/b70a092f3a412f683eaa183e19896172.png Gentoo, kdenlive-0.7.8, media-libs/mlt-0.5.10 USE="compressed-lumas dv ffmpeg frei0r gtk kde melt python qt4 quicktime sdl sse sse2 vdpau vorbis xine xml -debug -jack -libsamplerate -lua (-mmx) -ruby"
C. "RGB and YUV both have the right to exist." Yes, but only if you can actually use YUV tools. |
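The masking workflow in point A can be sketched generically. This is a hypothetical illustration, neither Edius nor Kdenlive code: a soft key on luma acts as the mask, and an arbitrary effect is blended in only where the mask is nonzero; the ramp between the `lo` and `hi` thresholds gives the soft, self-adjusting edge described above.

```python
# Hypothetical sketch of a keyed, soft-edged mask (not Edius/Kdenlive code).
def highlight_mask(y, lo=180.0, hi=220.0):
    """0 below lo, 1 above hi, linear ramp in between (the soft edge)."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def apply_masked(y, effect):
    """Blend an arbitrary effect into the pixel by the mask strength."""
    m = highlight_mask(y)
    return (1.0 - m) * y + m * effect(y)

dim = lambda y: y * 0.75  # example effect: darken to 75 %

apply_masked(100.0, dim)  # below the mask: unchanged -> 100.0
apply_masked(240.0, dim)  # fully inside the mask: darkened -> 180.0
apply_masked(200.0, dim)  # on the soft edge: partially darkened -> 175.0
```

The same structure works with any selection (a chroma key instead of a luma key) and any inner effect, which is the "dynamic mask for another effect" idea.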
Registered Member
|
On the subject of RGB effects: AFAIK the majority of Frei0r plugins used in Kdenlive work in RGB, not YCbCr? Although both have their place, YCbCr has a much greater gamut, and an 8-bit conversion to RGB, as Kdenlive does, uses only a fraction of the gamut present in YCbCr. So I'd rather use colour processing that converts to RGB at 32-bit float with a wide gamut, covering more of the YCbCr colours, than 8-bit integer RGB with poor precision, and risk getting artefacts from clipped 'invalid' YCbCr conversion because of it. Linear-space composite functions would be good too. :-)
|
Registered Member
|
Color adjustment is slightly easier in UV / CrCb because you move along two orthogonal axes rather than three 120 degree axes.
But in practice, it is not that hard to think in three axes. The RGB color adjustment has a "keep luma" option, so that only chroma is affected. Select0r can work in both RGB and YUV-type spaces, so it can select highlights of a specific color, if desired. Edius is proprietary, Windows-only software, so you will probably have a hard time persuading anybody here to install it (and Windows...) just to answer a question... |
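A "keep luma" adjustment can be sketched in a few lines. This is a minimal hypothetical implementation, not the actual frei0r code, and it assumes Rec. 601 luma weights: apply per-channel RGB gains, then rescale the result so the pixel's luma is unchanged, so only the chroma moves.

```python
# Hypothetical "keep luma" sketch (not the frei0r implementation),
# assuming Rec. 601 luma weights.
REC601 = (0.299, 0.587, 0.114)

def luma(rgb):
    """Rec. 601 luma of an RGB triple."""
    return sum(c * w for c, w in zip(rgb, REC601))

def adjust_keep_luma(rgb, gains):
    """Apply per-channel gains, then rescale to preserve the luma."""
    adjusted = tuple(c * g for c, g in zip(rgb, gains))
    y_before, y_after = luma(rgb), luma(adjusted)
    if y_after == 0:
        return adjusted
    scale = y_before / y_after
    return tuple(c * scale for c in adjusted)

pixel  = (180.0, 120.0, 90.0)
warmer = adjust_keep_luma(pixel, (1.2, 1.0, 0.8))  # push red, pull blue
# luma(warmer) equals luma(pixel) up to rounding; only the colour changed.
```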
Registered Member
|
Regarding RGB vs YCbCr and 8-bit vs more/float: there are not enough contributors to make more options available. Where external options do exist, the contributions to integrate or port them to MLT have not yet surfaced (and it is not a priority for me). Meanwhile, do not criticize the fact that there is a decent amount of frei0r plugins: someone decided to integrate them, and now they are available to Kdenlive users. Well, I guess one could claim their availability makes the community complacent.
|
Registered Member
|
Dan, I'm sorry you feel I was criticising Frei0r; I wasn't, and it would be unjust. They make no claim that their plugins are high-precision or intended for NLE use, quite the opposite.
I was responding to the assertion that RGB processing is inevitable and that there are plenty of issues with 'YUV', i.e. YCbCr; it depends on how it is treated by the host application. I think Kdenlive is great; the consistent development and the commitment of the devs, including yourself of course, in fixing bugs and adding features is excellent. |
Registered Member
|
R'G'B' vs Y'CbCr ....
When the authors decided Frei0r would be RGB only, I think that was an EXTREMELY GOOD decision, for at least two reasons:
1. If every plugin had to support several color spaces, there would be an enormous amount of code repetition and bloat.
2. RGB 4:4:4 is the undisputed king of color spaces. Both cameras and displays are RGB, so converting to anything else can only lose information. Y'CbCr has a MUCH smaller range/gamut than RGB, see: http://www.poynton.com/papers/Discreet_Logic/index.html
YUV was invented in the 1950s as a means of lossy analog data compression, based on human vision properties, to squeeze the color TV signal into the same bandwidth as the existing B&W standard. So YUV mainly makes sense in image storage and transmission, to reduce the needed storage and bandwidth. Other than that, it really isn't that useful. So, if the Frei0r specification changes in the future, I would suggest against adding YUV-type spaces with all of their zillion subsampling variants. If anything, then higher-precision RGB should be considered. |
Registered Member
|
I'm not disputing that the RGB colour model can be greater than subsampled YCbCr, and there is no talk of analog YUV; you take what I say out of context. We generally start with subsampled video as source, and a cheap 8-bit conversion to RGB yields less than the original video source, i.e. it's lossy. Applying cheap 8-bit processing, i.e. Frei0r, in a gamma-encoded sRGB colour space degrades the source further. That's why modern NLEs convert to RGB at 32-bit, and why they do compositing in linear space and effects at 32-bit float. Converting YCbCr to RGB doesn't suddenly make it equivalent to a 4:4:4, high-quality, wide-gamut source. It takes high-precision processing to get near to what was in the YCbCr source, and that is costly. 8-bit processing is known for error and cumulative degradation of the source.
I'm also not disputing that certain processing is better done in RGB; it's how YCbCr gets converted. |
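The lossiness of a cheap 8-bit round trip can be illustrated directly. This is a sketch assuming full-range BT.601 coefficients with simple clamp-and-round quantization (not MLT's actual conversion path): many Y'CbCr triples do not survive conversion to 8-bit RGB and back, both from round-off and from clipping of values that fall outside the RGB cube.

```python
# Sketch: count Y'CbCr triples altered by an 8-bit round trip through RGB.
# Assumes full-range BT.601 coefficients, clamp-and-round quantization.
def rgb_from_ycbcr(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

def ycbcr_from_rgb(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 128 + 0.564 * (b - y), 128 + 0.713 * (r - y)

def quant(v):
    """Clamp to [0, 255] and round: an 8-bit integer channel."""
    return max(0, min(255, round(v)))

errors = 0
total = 0
for cb in range(0, 256, 8):        # sample the chroma plane at mid grey
    for cr in range(0, 256, 8):
        src = (128, cb, cr)
        rgb8 = tuple(quant(v) for v in rgb_from_ycbcr(*src))
        back = tuple(quant(v) for v in ycbcr_from_rgb(*rgb8))
        total += 1
        if back != src:
            errors += 1
# More than half of these chroma samples come back changed, mostly because
# they lie outside the RGB cube and get clipped.
```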
Registered Member
|
Well, here are three things that we must not mix up:
1. Color space: RGB vs Yxx
2. Spatial chroma subsampling
3. Number of bits used
RGB has a bigger gamut than Yxx; subsampling has nothing to do with this. So, referring to your post from Mon, 01/31/2011 - 08:16, converting 8-bit Y'CrCb to 8-bit R'G'B' does NOT reduce the gamut. It only brings some round-off noise. Of course, you can't get real 4:4:4 from 4:2:0; I never said that. I just said that Frei0r works in 4:4:4, so it causes no spatial resolution loss on conversion. Of course, 8-bit dynamic range is quite limited (even more limited in Yxx!), but read the last sentence in my previous post. BTW, the Frei0r plugins I wrote use float internally. |
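Marko's "only round-off noise" point can be checked numerically for colours that start inside the RGB cube. Again this is a sketch assuming full-range BT.601 coefficients rather than any particular application's conversion: an 8-bit RGB to Y'CbCr to RGB round trip stays within a couple of code values per channel.

```python
# Sketch: worst-case per-channel error of an 8-bit RGB round trip through
# Y'CbCr.  Assumes full-range BT.601 coefficients, clamp-and-round quantizer.
def ycbcr_from_rgb(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 128 + 0.564 * (b - y), 128 + 0.713 * (r - y)

def rgb_from_ycbcr(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

def quant(v):
    return max(0, min(255, round(v)))

max_err = 0
for r in range(0, 256, 17):          # coarse sweep of the RGB cube
    for g in range(0, 256, 17):
        for b in range(0, 256, 17):
            y8, cb8, cr8 = (quant(v) for v in ycbcr_from_rgb(r, g, b))
            r2, g2, b2 = (quant(v) for v in rgb_from_ycbcr(y8, cb8, cr8))
            max_err = max(max_err, abs(r2 - r), abs(g2 - g), abs(b2 - b))
# max_err stays at a couple of code values: round-off noise, not gamut loss.
```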
Registered Member
|
Not disputing RGB gamut.
Re 8-bit YCrCb to 8-bit RGB: that was with reference to Kdenlive, so here's an example I'd like to query. My Canon 550D captures video to the BT.709 and BT.1361 (xvYCC) specifications. I understand xvYCC has a wider gamut than BT.601/BT.709; it's a more recent extension of BT.601/BT.709 where the video levels 1 to 254 are used, and the resultant gamut is much wider. So when I import my Canon files into Kdenlive and they're converted to RGB and effects etc. applied, is that done without restriction on the wider xvYCC gamut? Am I able to export back to H.264/AVC with the full xvYCC gamut and enjoy it on my LED TV via an HDMI 1.3 connection from a PlayStation 3, both of which support the extended gamut? Interesting reads: http://www.optics.arizona.edu/opti588/Presentation/EnrichedColorDisplay/Poynton_InforDispl_WideGamutDisplay.pdf http://books.google.com/books?id=ra1lcAwgvq4C&pg=PA256&lpg=PA256&dq=poynton%2Bcharles%2Bwide%2Bgamut%2Bdisplays&source=bl&ots=bOn8EHYZ88&sig=cXOWmf_AMH34GjG_y4aCZWhGbjc&hl=en&ei=beNJTa6ZKsmY8QPe7OybDw&sa=X&oi=book_result&ct=result&resnum=2&ved=0CBwQ6AEwAQ#v=onepage&q&f=false Another query: do you think it necessary to convert video to RGB and back to YCC just to adjust some brightness levels, for example with a Frei0r plugin, or would that have been easier and less damaging done staying in YCC? If I'd imported an interlaced video and wished to just adjust some levels, do you think it necessary to deinterlace (required to convert to RGB) in order to apply a levels adjustment, and then re-interlace at export? Would that be easier and less damaging done in YCC? I think there are perfectly good reasons not to convert to RGB. |
Registered Member
|
Well, you should have mentioned earlier in the debate that you are talking about wide gamut / xvYCC. I thought we were discussing various color representations within 709/sRGB, and in this context I said that converting to RGB will not cause much image degradation (I never said you HAVE to convert to RGB).
Of course, you have the right to enjoy your wide gamut display. But I have the right to some skepticism, too, and a gut feeling that wide gamut is more of a marketing trick than a real need. If you have ever watched a vectorscope, you probably saw that the blob is mostly close to the center, far from covering the 709 gamut. I have more often seen too much saturation in video than too little ("wide gamut" is wider in the sense of higher saturation). Fig. 1 in the first of the articles you linked shows that very well: even with the Macbeth color checker, which is designed to push the color process, only a few colors just barely jump the triangle. So, I side with Dan here, considering these things to be low priority. Also, to cite Poynton from the paper you linked: "The recently adopted HDMI 1.3 standard for digital interface to displays has provision for xvYCC-encoded signals. However, no gamut-mapping algorithms are standardized, and no guidelines for gamut mapping are provided." and: "However, introducing wide gamut to an open-systems environment is an order of magnitude more difficult than for a closed system. The challenge is to decide where and how gamut mapping should be performed." also: "Proprietary gamut-mapping strategies are built into the consumer-electronics equipment, and different manufacturers are likely to produce their own gamut maps." There I rest my case :-) |
Registered Member
|
There we are then: a YCbCr source with a wider gamut than the RGB representation in Kdenlive, which was my initial comment on this thread. I also made reference in that comment to artefacts introduced in such conversions to RGB, where the gamut mapping of a wider YCbCr space is done heavy-handedly by just clipping so-called illegal colours, rather than perceptually or with manual intervention by a 3D LUT, for example.
That is what I was saying before it was taken out of context. :-) And if we look to modern NLEs like Adobe CS5: yes, it is all RGB processing, but it is done at high precision and wide gamut. And before anyone gets the wrong end of the stick again, I'm not criticising Kdenlive or dev focus; my initial comment was to put Granjow's comment into the context of Kdenlive, not into RGB vs YCbCr nonsense. :-) A guy who has spoken many times of extended space is Shane Hurlbut, ASC. http://www.hurlbutvisuals.com/blog/2010/12/02/in-praise-of-dissent-adobe-cs5-paves-the-way/ A quote from Shane: I contacted the Adobe CS5 guru Mike Kanfer... "Yes, H.264 is definitely not considered a finishing codec, but to be clear, Premiere Pro does not use it in that way. The H.264 is read natively by Premiere and once it is decoded into the app. it “resides” internally in a 32 bit float extended color space that is unmatched for color fidelity and dynamic range. Your tests at Laser Pacific have proven that." ...longer explanation below: "Adobe CS5 reads the H.264 files natively into Premiere Pro and After Effects at the highest possible quality. Our color gamut and dynamic range for tonal detail from shadow to highlight is unsurpassed. There is even support for over-brights beyond 100% in After Effects. i.e. in plain English, we squeeze more out of these files than anything else out there! Shane Hurlbut’s filmout tests at Laser Pacific have verified that our interpretation of the H.264 is the smoothest and most filmic representation available. The magic comes from the use of proprietary interpretation algorithms and I might also mention that we bypass QuickTime for this process, which avoids the whole gamma conundrum. Once the file is living inside our apps on the timeline or project, we deal with the image information at the 32 bit float level. Now that is not saying we can make an 8 bit H.264 DSLR video capture look like perfectly shot IMAX footage scanned at 16 bits, but what we do offer up is the ability to edit, apply effects and color corrections within our apps. at an unprecedented level of quality." |