Registered Member
|
The last monologue….epilogue, you might say.
Well, I managed to source a few more raw clips from various DSLRs. Testing these in addition to the AVCHD clips I already had on file, I'm in a better position to arrive at some firm conclusions. The clips tested were:

Recording full YUV luma range (0-255):
- Nikon AW120 1080/29.970p AVC.mov
- Nikon 7000 1080/23.976p AVC.mov
- Canon EOSM3 1080/25p AVC.mp4
- Canon EOS6D 1080/25p AVC.mov
- Canon EOS70D 1080/29.97p AVC.mov

Recording with 16-255 YUV luma range:
- Canon HF-G10 1080/29.970PF AVCHD.mts
- Canon HF-G30 1080/50p AVC.mp4
- Panasonic TM900 1080/59.940p AVCHD.mts
- Panasonic Lumix G6 1080/50i AVCHD.mts
- Panasonic FZ330 1080/50p AVC.mp4

Edit: Also applies to HDV .m2t clips off Canon HV30 and HV20 camcorders.

Obviously (thankfully) I'm not going to post the individual test results and frame shots; suffice it to say that:
- All of the "full-scale" clips behaved exactly the same as the Nikon AW120 clip tested before, and all were recognized in the Clip Properties profile (metadata) as having a yuvj420p pixel format.
- All of the clips displaying a 16-255 luma range behaved exactly the same way as the Canon HF-G10 AVCHD .mts clip tested before, and all were recognized as having a yuv420p pixel format in Clip Properties.

On that basis, my conclusion is that the behaviours I reported in the preceding posts with the Canon HF-G10 and Nikon AW120 clips are typical and may be expected with HD AVC clips from other brands of camcorders and DSLRs recording in either of these two luma scaling modes.

In summary, the expected behaviours are as follows:

When editing native clips recorded with a 16-255 luma range profile, the original scaling is preserved when rendering out to any of the YUY2 and YV12 output formats, except where an "RGB-requiring" effect/transition is applied, in which case the luma of the affected frames will be clamped (clipped, limited) to 16-235.
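In case it helps to see the arithmetic, here is a sketch of the difference between clamping and range compression (my own illustration, using the standard full-to-studio-swing formula; nothing here is Kdenlive/MLT code):

```python
# Clamping: out-of-range values are simply cut off at the studio limits.
# This is what an "RGB-requiring" effect appears to do to 16-255 footage.
def clamp_to_studio(y: int) -> int:
    return max(16, min(235, y))

# Compression: the whole 0-255 range is rescaled into 16-235 (the standard
# full-to-studio-swing mapping). The gradient survives, but re-quantising
# it is a likely source of the banding artifacts mentioned below.
def compress_full_to_studio(y: int) -> int:
    return 16 + round(y * 219 / 255)

print(clamp_to_studio(255), clamp_to_studio(100))                # 235 100
print(compress_full_to_studio(0), compress_full_to_studio(255))  # 16 235
```

Clamping discards the out-of-range detail outright; compression keeps it, at the cost of rescaling everything else.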
The same outcome is expected when the native clips are first transcoded to DNxHD or Matroska .mkv (HuffYUV) using the preset profiles, and/or to other acceptable lossless intra-frame formats like FFV1 and UT Video using FFmpeg (CLI).

When editing native clips with the full-scale (0-255) luma range, it is expected that the renders will have a compressed 16-235 luma range, irrespective of whether an effect is applied or not. Transcodes created with the preset profiles will also have a compressed 16-235 range, but will suffer no further compression when they are processed, so the outcome is the same. The same will occur if FFmpeg (CLI) is used to generate the transcodes. Whether desirable or not, that is the expected behaviour - something to bear in mind if generating transcodes for use in other applications. Luma "banding" artifacts may also arise as a by-product of the compression scaling.

As regards the "Full Luma Range" option in Kdenlive: frankly, I could find no case at all for applying it to the native clips or the transcodes. In fact, when applied in this context, the outcomes are less than desirable. Specifically:
- Applying the "Full Luma Range" option to native 16-255 range clips and their transcodes (which preserve the 16-255 scaling) will result in (YUY2/YV12 format) renders where the luma is compressed to 32-235, irrespective of whether an effect is applied or not. Not desirable.
- Applying the "Full Luma Range" option to native full-scale 0-255 clips will have a null effect (as it would appear), but applied to the preset transcode formats (already compressed to 16-235) it will result in further compression to around 32-230, irrespective of whether an effect is applied or not. Not at all desirable.

The only possible case I can see for applying the "Full Luma Range" flag is for input clips that are CERTAIN to have full-range (0-255) scaling but which are reported to have a yuv420p or yuv422p pixel format in the Clip Properties profile.
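As a rule of thumb from these results, the pixel format reported in Clip Properties predicts the scaling; a tiny sketch (the function and its return strings are my own shorthand, not anything Kdenlive exposes):

```python
# In the clips tested here, a "yuvj*" pixel format (the "j" denotes the
# JPEG/full-range variant) went with 0-255 luma, while plain "yuv*" went
# with 16-255 camcorder-style scaling.
def expected_luma_scaling(reported_pix_fmt: str) -> str:
    if reported_pix_fmt.startswith("yuvj"):
        return "full scale (0-255)"
    if reported_pix_fmt.startswith("yuv"):
        return "16-255 (as on the camcorder clips tested here)"
    return "unknown"

print(expected_luma_scaling("yuvj420p"))  # the DSLR clips
print(expected_luma_scaling("yuv420p"))   # the AVCHD camcorder clips
```

The exceptions are exactly the "atypical" transcodes discussed next, where the declared format and the actual scaling disagree.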
And in my tests this only applied to transcodes created by two atypical methods, namely:
- Native "full-scale" clips encoded with FFmpeg to raw YV12 or YUY2 with the output pix_fmt set to yuvj420p or yuvj422p. Despite preserving the full-scale luma, these transcodes were declared as having yuv420p or yuv422p pixel formats in Clip Properties. The same could also apply to FFmpeg-encoded x264, given that libx264 now supports the full-scale pix_fmt flags, but I have not tested this extensively.
- "Full-scale" clips transcoded to lossless formats such as HuffYUV, FFV1 and UT Video (in YV12 or YUY2 colorspace) using Video for Windows (VFW) codecs. These codecs preserve the full-scale luma, as they merely replicate the scaling of the input unless specifically configured to do otherwise. Again, the pixel format of these transcodes is declared as yuv420p or yuv422p despite being full scale.

Applying the "Full Luma Range" option in these cases results in compression to 16-235, and so may be considered a way of forcing the same outcome that is 'expected' when native 0-255 range clips are processed with the "Full Luma Range" setting OFF (as it is by default). Leaving the "Full Luma Range" setting off in these cases, however, presents another outcome: the full-scale luma is preserved, except where an "RGB-requiring" effect is applied, in which case the luma of the affected frames is clamped to 16-235. This may be desirable for "pass-through" straight trim/cut editing, or when effects are applied to an entire clip or series of clips on the timeline and it is desirable to maintain the original luma gradient at the expense of clamping. So each has its pros and cons.

Granted, these tests did not include native clips from a Sony model; if anyone can point to a source of a representative raw clip for download, I'll be happy to look at it and see whether it conforms to the same behaviour.
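To boil the whole flag question down, the decision rule these tests suggest can be written out in a couple of lines (the function and its names are mine, purely illustrative; not part of Kdenlive):

```python
# Per the tests above: the only case for switching "Full Luma Range" ON is a
# clip that is KNOWN to be full scale 0-255 yet is reported as plain
# yuv420p/yuv422p in Clip Properties (e.g. the raw/VFW transcodes described).
def use_full_luma_range(actual_full_scale: bool, reported_pix_fmt: str) -> bool:
    return actual_full_scale and reported_pix_fmt in ("yuv420p", "yuv422p")

print(use_full_luma_range(True, "yuv420p"))    # atypical transcode: maybe ON
print(use_full_luma_range(True, "yuvj420p"))   # DSLR native: leave OFF
print(use_full_luma_range(False, "yuv420p"))   # 16-255 camcorder: leave OFF
```

In every other combination, leaving the option OFF (the default) gave the better outcome in my tests.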
And it goes without saying that if anyone has observed different behaviours with their video sources, it would be interesting to know more. I've nearly finished going through the KdenLive Effects and Transitions to distinguish those that require RGB (the majority) and those that appear to be performed in YUV colorspace (some).
Last edited by Inapickle on Mon Feb 01, 2016 8:54 pm, edited 2 times in total.
|
Registered Member
|
Thank you very much for the detailed work and the summary. This is heavy stuff; I will need some time (and coffee) to let it trickle down into my brain.
I want to try to check the H.264 footage I record using an HDMI recorder in 1080p30 with respect to its recorded luma range. How could I do this without having to resort to Wine and Windows tools? Did you find a way to gather such information using Linux video tools only?

PS: I own the HF G30 and it works very well for me. Especially as I came from GoPro HD Heros, I find Canon's **** SD card handling totally reliable, in contrast to GoPro, which killed several FATs during filming. Of course, DSLRs are now very powerful... but I'm unclear about their user interfaces. Is filming with a DSLR as convenient as with a dedicated video cam, such as the HF G30? What is your experience? |
Registered Member
|
Thanks. Well I must admit it was a bit of a chore, but I felt I owed it to myself and anyone who had taken the time to wade through my preceding posts to come to some reasonable evidence-based conclusions. Hence the entry I added to the first post - "OR SKIP TO POST #31 FOR MY SUMMARY CONCLUSIONS IF YOU DON'T WANT TO PLOD THROUGH THE ENTIRE MEANDERING VOYAGE OF DISCOVERY"
I'm afraid not as yet, but with this study now out of the way, it's something I mean to revisit. The thing is, to provide information comparable to the AVISynth YUV histogram, any alternative tool/method has to take the raw YUV output from the decoder, unbiased by any flags that could cloud the picture. If I could at least get VDub working in Wine it might be a start. And I'm not really inclined to venture into the "running Windows in a VM" domain as an alternative "solution"; just as easy to dual-boot for now. Again, I might inquire over in the Doom9 forum, as there's a lot of cross-platform expertise there, and people far brighter than me. Meanwhile, as regards your HDMI captures, if you want to get some samples across to me I'd be happy to examine them. Do you have any raw GoPro clips also?
Great camcorder; resolves all the design quirks of the HF-G10/G20 (no cold accessory shoe, for one), plus the mp4 and 50/60p options. Have you taken the HF-G30 down on dives, by the way? When I was considering possible options for weather-proofing my HF-G10, I had a very brief look at housings, until I saw the prices - yikes! I used to scuba a fair bit myself in the past when I was working abroad, but that was before I got into videography. That started when the twins were born... and the diving waned.
Out of interest, was that related to overheating issues?
I've never used one. That said, with the astounding 4K video images I'm seeing come off quite reasonably priced models, it's quite alluring. My tendency, though, is to wait until such (relatively) new modalities have matured some before I jump in.

Just one other thing while I'm thinking about it: what I found a bit frustrating when running these tests was that in order to refresh the Clip Properties metadata for a newly added clip, I had to exit from Kdenlive first; otherwise it retained the metadata from the last one. I picked up on that from the outset, but I'm thinking that if an unsuspecting user reads these pages and acts on the recommendations I made in the summary, they could find themselves wrongly interpreting their footage if they are not aware of this - i.e. the declared pixel format in the Clip Properties profile is a key factor in predicting the behaviour.

Also, regarding the description of the "Full Luma Range" option given in the Kdenlive Manual: https://userbase.kde.org/Kdenlive/Manual/Full_Luma/en
Having done these tests, I'm now a lot clearer about what it all means, but I will stick my neck out regarding some of the statements there - which are what started all of this inquiry. Quote:

"By setting the full luma on, which should only be done for camera sources known to be full range 0 - 255 or even 16 - 255 such as FS100, Nex5, HV20, HV30 and probably many more consumer cameras"

I don't know about the FS100 and Nex5, but for those camcorder/DSLR clips I tested that record with a 16-255 luma range, applying the "Full Luma Range" option would definitely not be my recommendation.

Edit: And I've just tested a couple of native HDV .m2t clips that I had kept from my old HV30 (30PF) and HV20 (25P), and they both behaved in exactly the same way as the other 16-255 range AVC clips I tested. Both were reported to have a yuv420p pixel format in Clip Properties, so no surprises there.
And then, quote:

"Canon and Nikon DSLR's too but a little more complicated, we can export video with the levels as imported, BUT and it's a big but, that is without doing any RGB operations in Kdenlive, so no effects, color correction etc. If any effects are added then the output will be restricted range again"

I'm still not exactly sure whether this refers to recordings made with full-scale 0-255 or not, but from the last sentence I'm assuming yes. All the clips I tested off the Canon and Nikon DSLRs were full scale. And there, the only way to "export video with the levels as imported" (taking that to imply full scale) was by using the 'atypical' transcode methods (FFmpeg-encoded raw YV12/YUY2, or VFW-encoded lossless formats). And yes, in that case, when "RGB-requiring" effects (as I call them) are applied, the result is clamping to 16-235 - but only when the "Full Luma Range" option is left OFF. When switched ON, the renders from these "atypical" transcodes reverted to the compressed 16-235 outcome, irrespective of whether an effect was applied or not. Whether there are other recording modalities on Canon and Nikon DSLRs that behave differently I'm not sure, but I do find those statements rather misleading in the light of what I discovered.
Last edited by Inapickle on Mon Feb 01, 2016 6:17 pm, edited 2 times in total.
|
Registered Member
|
Yes, I've used the HF G30 during some dives with an Ikelite dive housing. While the picture and zoom quality, together with the five-axis deshaker of the HF G30, are really great, the overall handling of the dive housing is not as pleasant as with my tiny three-HD-Hero3 rig. You can see the three or so films I've taken with the HF G30 on my YouTube channel; the description lists the cam that was used. The shots taken in the Green Lake in Austria would have been impossible with the HD Hero; we were around 30m and more away from the fish... so buoyancy control to within only a few tens of centimetres so as not to touch the ground (never touch the ground in Green Lake!), while keeping the fish at a distance, was not easy.
Overheating was not an issue for my HD Hero 3, as I primarily dived in cold water, around 4 to 8 °C. Still, the small surface area of the HD Hero 3 caused me a lot of fogging headaches, even with painfully careful drying and installation procedures. Battery runtime was much too short, especially as I need the LCD BacPac for correct framing. And no, the FAT issues seem rather to come from issues with power management and the mSD controller. |
Registered Member
|
Interesting. Sub-aqua videography must present some real challenges dealing with fluctuating luminance levels, both in-camera and in post. I can well appreciate why that implementation of keyframable opacity control would be of great value to you:
viewtopic.php?f=270&t=116876 |
Registered Member
|
Yes, I have found a solution - using VapourSynth, which has its own version of the AVISynth Histogram filter that I used in the tests. So it's all quite doable in Linux, without involving Wine in any way. I'm just working through some niggles with the scripting to get it to open directly in the mpv media player. All new to me, but I'm getting there. Once set up, it should be just a question of pasting the path of the video in question into the script. I'll post the method when I'm done. |
Registered Member
|
Oh, there's a simple rule for getting good shots: you bring in the camera, you bring in the light. That's why I have two 2000 lm LED dive lights on my rig. Of course, far shots such as in the Green Lake work only when there is some amount of light from above. Automatic brightness control actually works quite decently, so within a scene there's mostly only a static correction required. The HF G30 turned out to be a fine low-light beast, but you need to manually clamp the automatic gain control. That and manual WB setting were the most important spec requirements, which ruled out semi-pro Sony and most other digi cams very early on; basically leaving only the HF Gxx series from Canon. Or the GoPro HD Hero 3 and later in Protune with white balance completely off. |
Registered Member
|
Inapickle, maybe you could ask the MLT developers to implement a YUV graph effect? They already have the RGB graph effect. If I understand correctly, this would allow Kdenlive to use that MLT effect to implement a YUV pane, similar to the RGB parade, et cetera.
As for the luma range: I'm interested in rendering H.264 YUV footage such that all scenes are consistently in the same luma range, preferably 16-235. I would like to get this range irrespective of whether the source clips are 0-255, 16-255, or 32-255, and regardless of whether I have applied RGB-based effects or transitions. Is this already possible? And which buttons do I need to push? |
Registered Member
|
I'll have to come back to you on these points - I'm a bit tied up this evening.
Just a quick question: are those 32-255 scale sources native camcorder/camera video clips, or renders/transcodes where the black point has somehow got shifted up? |
Registered Member
|
Some are native clips, not touched. But some ... would changing the frame time points in ffmpeg compress the luma range?
|
Registered Member
|
I really don't know; I've never done anything like that in FFmpeg. It's difficult to imagine how the scale would end up at 32-255, as compression or clamping would surely bring the whites down to 235 in any case. If you could post a typical FFmpeg coding string that produced such a result, I could try to replicate it and maybe figure out what's gone on. If, as is the more likely case, the black point has got scaled (i.e. compressed) up to 32, there might be some scope for bringing it down again; but if it was clamped by an offset, there's nothing you can really do. I'll be posting the VapourSynth method for the YUV histogram shortly, if you want to look into that yourself. I'll address the comment you made about the prospect of a YUV (Parade-like) WFM at the same time.
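For the "scaled up to 32" case, bringing the levels back down is just a linear remap; a sketch of the arithmetic (my own illustration - in practice you'd do this with a levels/curves effect or an FFmpeg filter rather than by hand):

```python
# Linearly remap a 32-255 luma range back onto 16-235. Only valid if the
# black point was genuinely rescaled up to 32; if it was clamped there by
# an offset, the shadow detail is gone and no remap can recover it.
def remap_32_255_to_16_235(y: int) -> int:
    y = max(32, min(255, y))                 # guard against stray values
    return 16 + round((y - 32) * (235 - 16) / (255 - 32))

print(remap_32_255_to_16_235(32), remap_32_255_to_16_235(255))  # 16 235
```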
Of course, in Kdenlive there are a number of effects that could well shift the black and/or white points. That's what made it a bit difficult when I was going through the effects to distinguish those that "require RGB" from those that don't, based purely on whether the 16-255 scale of the test clip (HF-G10 .mts) gets clamped to 16-235 or not. And with some of the more involved effects, I didn't really get into the finer points of control - merely nudged the parameters just enough to get an effect. For some, merely applying the default effect altered the luma profile. Cases in point:
Artistic Grain: http://i.imgur.com/OvtHID3.jpg
Artistic Old Film: http://i.imgur.com/plgXTFa.jpg

Probably as good a time as any to mention the Kdenlive effects/transitions that, from my testing, appear to be performed in YUV space and so do not require RGB:
- Transitions: Wipe, Slide and Dissolve
- Color: Technicolor, Sepia, Greyscale, Chroma Hold - these effects only affect the YUV chroma (UV values), not the luma
- Color Correction: Gamma
- Distort: Mirror, Wave
- Blur: Auto-Mask
- Artistic: Dust, Scratchlines, Sobel (Sobel is merely an inverted edge mask, so the original YUV comes through on the edges)
- Motion: Speed, Freeze

Obviously, where you have effects like Fade To/From Black, the scale points shift as soon as the black background appears in frame. There are a couple of the more complex effects that I need to look at again more closely - Dynamic Text, VStab etc. |
Registered Member
|
For instance,
|
Registered Member
|
I've just tested that ffmpeg command with some native 0-255 and 16-255 scale clips, and in both cases the YUV profiles were passed through unchanged. Same thing with the Motion: Speed and Freeze effects in Kdenlive. So there must be some other cause.
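That matches what you'd expect: retiming (speed-up, freeze) only selects or repeats frames, it never rewrites the samples. A toy model of why the range survives (my own illustration, nothing to do with ffmpeg internals):

```python
# Model a clip as a list of frames, each a list of luma samples. A 2x
# speed-up just drops every other frame; the surviving samples -- and hence
# the min/max luma profile -- are identical to those in the source.
clip = [[16, 128, 255], [30, 90, 250], [16, 40, 255], [60, 70, 80]]

speeded_up = clip[::2]                 # 2x speed: keep every 2nd frame

flat = [y for frame in speeded_up for y in frame]
print(min(flat), max(flat))            # 16 255 -- range unchanged
```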
|
Registered Member
|
Behold the mighty method for generating the VapourSynth equivalent of the YUV histogram I used in the above tests. I could have got there quicker and saved myself some grief had I discovered earlier that there is a repository with (nearly) all of the required packages, but such is life:
METHOD FOR GENERATING A YUV HISTOGRAM USING VAPOURSYNTH
Tested in Kubuntu 15.10 (AMD64)

VAPOURSYNTH SET-UP

1. GET REQUIRED PACKAGES
Add this PPA to your software sources:
From here: https://launchpad.net/~djcj/+archive/ubuntu/vapoursynth
After refreshing you'll likely get a system update prompt for libass (who thinks up these names?).

2. INSTALL PACKAGES
I used Synaptic. Install:
Zimg: http://i.imgur.com/zJAy9Ed.png
ImageMagick: http://i.imgur.com/UgsMWB0.png or http://i.imgur.com/fAnEmNa.png
MPV Player - this is a special version with built-in VapourSynth support: http://i.imgur.com/P1KOwxJ.png

3. TEST VAPOURSYNTH
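The test itself amounts to importing the module from Python and asking it for its version; a hedged sketch (get_core() is the VapourSynth API of that era, and the try/except is only so the snippet degrades gracefully where VapourSynth isn't installed):

```python
# Step 3: confirm the vapoursynth Python module loads, and report its version.
try:
    import vapoursynth as vs
    msg = vs.get_core().version()   # the VapourSynth core version string
except Exception:                   # module missing (or API has moved on)
    msg = "vapoursynth module not available - check the packages from step 2"
print(msg)
```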
This should return the version information. Exit() to exit from Python.

4. EXPORT THE ENVIRONMENT VARIABLES AND LOAD VAPOURSYNTH EDITOR
CREATE AND VIEW THE VAPOURSYNTH SCRIPT
The VapourSynth Editor GUI should also now appear in Applications under Utilities, so you should be able to access it from there. Open the VapourSynth Editor and paste this script into the edit field:
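For reference, a minimal script along the lines described below would look something like this (ffms2, hist and resize are the standard VapourSynth plugin namespaces, but treat the exact lines as a sketch); the snippet holds the script in a string so it can also write out a ready-to-open .vpy file:

```python
# A minimal VapourSynth histogram script, reconstructed from the steps in
# this post. "Test.mov" is a placeholder path; hist.Levels draws the YUV
# histogram, hist.Classic the luma waveform.
VPY_SCRIPT = """\
import vapoursynth as vs
core = vs.get_core()
clip = core.ffms2.Source("Test.mov")
#clip = core.resize.Bicubic(clip, width=960, height=540)
clip = core.hist.Levels(clip)
clip.set_output()
"""

# Save it so it can be opened in VapourSynth Editor (or piped to mpv later).
with open("histogram.vpy", "w") as f:
    f.write(VPY_SCRIPT)
print("wrote histogram.vpy")
```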
The first two lines of the script should autoload by default, so avoid duplicating them. Edit the clip = core.ffms2.Source("Test.mov") line to set the path and name of the video clip being tested. Example: http://i.imgur.com/DPdjPzh.png
Click on the Script tab > Preview to open the script video output: http://i.imgur.com/iC6LzG7.png
Adjust the scaling as needed - Fit to Frame, No Zoom, Fixed Ratio etc. You can also adjust the dimensions of the script video output with the line:
Delete the # to un-hide that line and adjust the width and height (the respective numbers in brackets) as desired. To change from the YUV Histogram view to the Luma WFM view, replace Levels with Classic:
http://i.imgur.com/nrCnny6.png
When you are happy with the script, save it (File > Save Script As) as a .vpy document wherever you like. You can then open and edit the script as you wish.

Note: when you open VapourSynth Editor you may well see these warnings in red in the log section: http://i.imgur.com/eZgnQgX.png
It fazed me too; don't worry about it. When you open the Preview the required plugins will auto-load, and after closing the Preview the warnings should disappear.

VIEWING IN MPV PLAYER
The script can also be piped to open and play in mpv player directly. Terminal command:
Set the name and path of the script .vpy accordingly. That's it.

Edit: Regarding the example I posted: http://i.imgur.com/iC6LzG7.png
It might be noted that the frame rate in the Preview is reported as 59.9401 fps. That's because this .mts clip (off my Canon HF-G10) is recorded in the 1080/30PF format, i.e. Progressive Segmented Frame (PsF): progressive, but with an interlaced (field-based) H.264 picture coding structure, and so 'flagged' as interlaced, purely to comply with Blu-ray/AVCHD standards. A quirk of the ffms2 source filter is that it will output these clips at 59.9401 fps. These are not bob-deinterlaced frames as such, but frame duplicates. To avoid that, the 'correct' fps (29.970) has to be specified by setting the fps numerator and denominator parameters of the source filter accordingly (30000/1001 = 29.970).
The equivalent ffms2 filter for AVISynth behaves the same way. It's one of the reasons why I use another AVISynth filter (DGDecIM) for loading these 1080/30PF .mts clips, which doesn't do that. Unfortunately, there's no equivalent for VapourSynth. Once the native 1080/30PF .mts clips have been "correctly" transcoded as progressive frames to another format, however, this is not an issue.

Another problem that can arise when editing the 1080/30PF .mts clips is if the NLE incorrectly interprets the YV12 chroma subsampling pattern as interlaced, which it's not. This can result in "chroma upsampling error" artifacts (chroma mice-teeth/jaggies) when the YV12 is converted to RGB and back. The NLE I used in Windows (Corel VideoStudio Pro) does just that, so I was never happy using it for editing native clips and used a lossless intermediate format (UT Video, MagicYUV) instead. Fortunately, Kdenlive does not appear to suffer from this issue, making native editing of these clips viable. Of course, with your HF-G30, TheDiveO, you have the option of recording "native progressive" HD AVC as mp4, which avoids all of these issues. Just something to be aware of for anyone who records in these PsF formats.
Last edited by Inapickle on Thu Feb 11, 2016 12:09 am, edited 1 time in total.
|
Registered Member
|
Inapickle, thank you very much! I followed your instructions on a Kubuntu 15.10 installation and got everything up and running. I then took a look at my H.264 HDMI recorder footage (in an .mp4 container), both the original and after a run through ffmpeg for speed-up. If I'm correctly interpreting the side bars in the histograms as the range markers, then in both the original recording and the ffmpeg-processed one the luma range is 16-235. I also get the fps correctly as 29.97 and the format as YUV420P8.
Yeah, support for H.264 in .mp4 was a high priority for me for easy working, as I had already done a lot with GoPro H.264 in .mp4 without many problems. So I wanted to stay away from .mts containers. Now if we could only get a YUV histogram as another scope in Kdenlive! |