Capture card vs screen recording software
|
Raven Dmytryk
Registered User
Join date: 29 Jan 2006
Posts: 1
|
01-12-2008 05:49
I'll be building a new gaming rig soon and would like to start playing in machinima land, but I'm debating whether to use screen capture software (à la Fraps) or to turn a spare PC into a dedicated video capturing and processing system.
As I see it with the video capture card method, I will:
- avoid bogging my main system down with additional software running (though Fraps is very forgiving)
- have less micromanagement of recording sessions
- record straight to MPEG-4, thus saving space and post-processing time
Any negatives to add with capture cards or bonuses to using screen recording software?
|
Geuis Dassin
Filming Path creator
Join date: 3 May 2006
Posts: 565
|
01-12-2008 13:32
I don't know enough about capture cards to say whether they're good or bad. However, keep in mind that if you compress your raw video right out of the gate, you decrease the quality of your final movie after editing and final export.
|
AWM Mars
Scarey Dude :¬)
Join date: 10 Apr 2004
Posts: 3,398
|
01-14-2008 06:25
IMHO capture cards are useful for external connections (digital camcorders, TV/DVD players, usually via FireWire or S-Video), but they have no special ability to handle data rendered by the host system beyond what the existing graphics card offers. All that data arrives and leaves via the PCI bus.
Fraps, like others, can capture either the whole screen or a portion of it (a region), via the memory address registers in video RAM where the screen is mapped. That data is shared over the bus pipelines between the screen output and the PCI bus, which takes it back to the storage device.
Capturing with any form of lossy codec will automatically degrade the data; this is not like a zip file, where what you compress comes out intact. If you then apply transitions and masks to that data in the editing programme, it will for the most part struggle to create a smooth and useful mask, and some can fail outright.
In my experience, start with raw data; edit and apply masks, transitions, etc.; then begin reducing the media to an acceptable size (this may require re-rendering several times at each stage); then apply your final codec and format for delivery and decompression. Overzealous use of a codec at any one stage, or at several, will have a marked effect on the final product, degrading it too far, too quickly.
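A quick sketch can make the zip-versus-codec distinction above concrete. This is illustrative Python, not part of any capture tool; the "codec" here is just a toy quantiser standing in for lossy compression:

```python
import zlib

# A synthetic "raw frame": a repeating byte pattern, like flat colour areas.
frame = bytes(range(256)) * 64

# Lossless round trip (zip-style): every byte survives.
packed = zlib.compress(frame)
assert zlib.decompress(packed) == frame

# A toy lossy "codec": quantise each byte to a coarser scale, the way
# lossy video codecs discard fine detail to save space.
STEP = 16
encoded = bytes(b // STEP for b in frame)                       # coarser values
decoded = bytes(min(255, b * STEP + STEP // 2) for b in encoded)

# The data is close to the original, but not identical - the loss is permanent.
assert decoded != frame
assert max(abs(a - b) for a, b in zip(frame, decoded)) <= STEP // 2
```

The zip round trip is bit-exact; the quantised round trip is only ever approximate, which is why every extra lossy pass in a workflow costs something.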
_____________________
*** Politeness is priceless when received, cost nothing to own or give, yet many cannot afford - Why do you only see typo's AFTER you have clicked submit? ** http://www.wba-advertising.com http://www.nex-core-mm.com http://www.eml-entertainments.com http://www.v-innovate.com
|
Vlad Bjornson
Virtual Gardener
Join date: 11 Nov 2005
Posts: 650
|
01-14-2008 08:07
I experimented with capturing video via the video out (S-Video) of my video card to my mini-DV camera. I was disappointed with the results. It was all mid-to-high-level gear, but the resulting video was not as sharp as a capture using Fraps. Sort of like the difference between an HD and a standard-definition signal, I suppose.
I'd say if you are building a new gaming rig, it should be powerful enough to handle Fraps and SL at the same time. My system is just mid-range (2 GHz single-core CPU, 1 GB RAM, nVidia 7800 GT) and my frame rate is pretty much the same with or without Fraps recording.
_____________________
I heart shiny ! http://www.shiny-life.com
|
Geuis Dassin
Filming Path creator
Join date: 3 May 2006
Posts: 565
|
01-14-2008 10:06
Hmm, that's interesting, AWM. I wonder if it's programmatically possible to compress video data to a zip file while recording, then unzip it afterwards for full data recovery.
|
AWM Mars
Scarey Dude :¬)
Join date: 10 Apr 2004
Posts: 3,398
|
01-14-2008 16:01
From: Geuis Dassin
Hmm, that's interesting, AWM. I wonder if it's programmatically possible to compress video data to a zip file while recording, then unzip it afterwards for full data recovery.

My guess is you would not be able to do it on the fly; zip file creation takes a lot of CPU time, not to mention cycles from the data bus.

The most successful way I have found is to install separate SATA II RAID controllers, running SL from one, the OS from another, and saving raw data to a third. Keeping the caches for each element on their own HD set helps too. The only limitation then is the shared data bus.

I don't know the full specs of FireWire, but capturing on one system and then linking another system via FireWire for saving may work, at least theoretically: you are not taking bus cycles, only a CPU interrupt. As I say, I've never experimented with that.
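A back-of-the-envelope calculation supports the guess that zipping raw video on the fly would struggle. This snippet just does the arithmetic; the resolution and frame rate are illustrative assumptions, not anyone's actual capture settings:

```python
# Hypothetical raw-capture settings: 1024x768, 24-bit colour, 30 fps.
width, height, bytes_per_pixel, fps = 1024, 768, 3, 30

frame_size = width * height * bytes_per_pixel   # bytes per uncompressed frame
rate = frame_size * fps                         # bytes per second

print(frame_size)   # 2359296 bytes - 2.25 MiB per frame
print(rate)         # 70778880 bytes - about 67.5 MiB every second
```

Any on-the-fly compressor would have to sustain that input rate continuously while the game, the capture tool and the disk writes all compete for the same CPU and bus, which is the bottleneck being described above.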
|
Mixin Pixel
Registered User
Join date: 24 Jan 2006
Posts: 6
|
Hdmi
04-27-2008 22:17
What if you capture to an HD camera via HDMI output at 1080p? Has anyone tried that?
|
AWM Mars
Scarey Dude :¬)
Join date: 10 Apr 2004
Posts: 3,398
|
04-28-2008 02:00
Again, my opinion would lean towards a slow data capture rate causing bottlenecks; you would still have to cache the stream, stealing CPU cycles and HD controller focus.
|
Protagonist Losangeles
Registered User
Join date: 7 Nov 2007
Posts: 29
|
04-28-2008 07:17
There are a couple of misconceptions flying around this thread, which I think are causing confusion.
Basically, all video editing involves the compression and decompression of data (the word codec comes from COmpression/DECompression). Compression therefore is not a bad thing; it's a fact of life in any form of video production and post-production.
Different codecs are designed to do different jobs... and the problems only really start to occur in post production when you move your footage through too many conversions.
The loss of image quality during post-production isn't directly related to the codec, but is largely to do with the way your NLE handles the images. So, for instance, FCP handles video using various QT codecs, but works in an eight-bit environment. So anything captured at standard def will start to deteriorate if it has gone through a high number of renders.
By comparison, Adobe After Effects works in a 16-bit environment (it may be more than that now; no time to fact-check this) and you can composite and render to your heart's content without significant damage to your images.
At the same time, it's worth remembering that it's possible to use SD images to create a perfectly acceptable 35mm film print, capable of showing in a cinema anywhere in the world.
What this means in terms of the practical capture and processing of machinima is that it's overkill to capture data-heavy images... and that it's more about creating a virtually lossless workflow. As most HD cameras now record direct to hard drives, buffering your images through a camera is just exposing your workflow to a needless piece of kit, which isn't designed to handle that process.
I'd also argue that anyone planning to distribute the end product via the web probably doesn't need to capture raw AVI, and that the best workflow has to be dictated by your NLE's requirements.
So for Mac users, as FCP encodes everything in QT, then it makes sense to capture in a QT codec.
Personally I'm still experimenting with workflows and haven't made a decision on what works best, for what I want to achieve, yet.
|
AWM Mars
Scarey Dude :¬)
Join date: 10 Apr 2004
Posts: 3,398
|
04-29-2008 19:59
I think you have missed the point of the original post... he wants to build a rig to capture and process the footage.

Using Fraps as a screen capture, or any other capture programme that applies a codec, would result in compressed data being used in editing; it is not reversible by the editing programme. I capture without a codec being applied and edit with good-quality data, which in my opinion gives us the quality of output that we do stream on the internet and into SL. In doing so, we do not have to worry about artifacts or, more importantly, pixel size/shape. Drop by sometime and see our work and you will see.

From that data, I can produce virtually any format/resolution up to 1600x1200 by adjusting the pixel size and rendering crop window in Sony Vegas, whether for the internet, DVD, HDMI, PAL, NTSC or one of our plasma wall screens in SL.
|
Protagonist Losangeles
Registered User
Join date: 7 Nov 2007
Posts: 29
|
04-30-2008 06:01
I haven't missed the point at all, but I'm not sure how to explain any more clearly that the connections between resolution/pixels and data aren't as simple as: uncompressed is good, compressed is bad.

Perhaps if you watch this short lecture by Dan Dennett about the nature of perception it may help: http://www.ted.com/index.php/talks/view/id/102 Or maybe not.

I'm glad that you've found a workflow that gives you what you want, but it's not the only way to do it, and there will be codecs that provide a better workflow than the one you're currently using. I mean, if you want near-infinite resolution, why aren't you converting all your captured data to Flash vector? Then you could zoom into any area of your image without incurring any resolution loss at all.
|
AWM Mars
Scarey Dude :¬)
Join date: 10 Apr 2004
Posts: 3,398
|
04-30-2008 15:52
I agree about Flash, I've been using it for years, but we make movies for SL, and for now only QuickTime is native, which only plays version 3 Flash or less, and that doesn't stream that well. My point about using a codec on captured footage is simply that if you start with YouCrude quality when editing and rendering, the results will not improve. Using the MOV format in the correct manner, we can achieve results that stream into SL and can be watched clearly on a big screen, with this sort of quality: http://www.wba-advertising.com/MusicContest/waitforme.html so there's no real need for Flash alternatives.
|
Protagonist Losangeles
Registered User
Join date: 7 Nov 2007
Posts: 29
|
05-01-2008 04:36
"My point about using a codec on captured footage, is simply if you start with YouCrude quality to begin with when editing and rendering, the results will not improve."
OK. Now we're getting somewhere.
Of course it's true that if you capture using a low-quality H.264 codec designed for web streaming, then your picture quality can't go anywhere but down.
However, there are thousands of codec options -- HD 720p, for instance, has its own codecs. If you're capturing at HD, then the loss in NLEs is almost zero (even in an 8-bit environment).
Basically, any of the codec choices which give you standard def or above quality are perfectly acceptable for almost any kind of film production.
The trick to avoiding loss in post-production is in understanding how NLEs work... and this is where understanding the bit environment becomes vital. 8-bit environments degrade images with each successive render, because each render takes the image through a new compression/decompression cycle. 16-bit environments provide a better safeguard against loss.
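The 8-bit point can be sketched numerically. This toy pipeline is purely illustrative: the 1.1 gain is an arbitrary stand-in for an adjustment, not how any real NLE processes frames:

```python
def render_8bit(values, gain):
    """One 'render' in a toy 8-bit pipeline: apply an adjustment,
    then quantise straight back to 0-255 integers."""
    return [max(0, min(255, round(v * gain))) for v in values]

def render_float(values, gain):
    """The same adjustment in a higher-precision pipeline:
    no intermediate quantisation."""
    return [v * gain for v in values]

original = list(range(256))   # a full 8-bit gradient

# Two renders that should cancel out: brighten, then darken by the same factor.
eight_bit = render_8bit(render_8bit(original, 1.1), 1 / 1.1)
high_prec = [max(0, min(255, round(v)))          # quantise once, at the end
             for v in render_float(render_float(original, 1.1), 1 / 1.1)]

errors_8bit = sum(abs(a - b) for a, b in zip(original, eight_bit))
errors_hp = sum(abs(a - b) for a, b in zip(original, high_prec))
print(errors_8bit, errors_hp)   # the 8-bit path has drifted; the other has not
```

Quantising (and clipping) at every intermediate render throws information away that the inverse adjustment can never recover; keeping precision through the chain and quantising once at the end returns the original gradient exactly.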
Which brings me back to my original point, which is that image quality is as much about understanding what NLEs do to the images as it is about the initial capture.
Now, at this moment in time I don't have an answer to the question: "Which is the best codec to capture with?" -- I have my suspicions... but I'm still researching.
What I'm fairly confident of is that the end answer won't be as data-heavy as your workflow.
However, I'm also a believer in people sticking with what they know... and having seen your work, I can see that we're trying to do very different things with the SL platform. So my workflow probably wouldn't deliver what you're trying to achieve.
|
AWM Mars
Scarey Dude :¬)
Join date: 10 Apr 2004
Posts: 3,398
|
05-03-2008 05:11
I have thousands of codecs on my system, but it is the limited ones available to the capture programmes that almost force me to capture in raw data format.
Having been doing so for the past few years, we, as you say, have a workflow based on templates we have created.

In answer to the OP: there is no 'right' codec, just a collection of wrong ones. Getting to the final output stage and achieving quality, low data bit rates and usable resolution is more of a compromise, especially if you intend to stream the media into SL. Starting off with good-quality footage perhaps gives you more flexibility to decide which elements you want to try reducing to get the best compromise.

My advice: don't worry about codecs when grabbing footage (preferably don't use one), and try using as passive a codec as possible at the rendering stage. Each time, redo it, lowering in small increments the resolution (you can use pixel size/shape as well), the fps, the colour depth and the codec settings. Once you get a good compromise, save that template.
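The "step the settings down until it fits, then save the template" routine above could be sketched as a simple search. Everything here is hypothetical: the size model is a crude bits-per-pixel estimate, not any real encoder's output, and the candidate settings are made-up numbers:

```python
def estimated_kbps(width, height, fps, bits_per_pixel):
    # Crude bitrate model: pixels per second times bits spent per pixel.
    return width * height * fps * bits_per_pixel / 1000

def find_template(target_kbps, candidates):
    """Return the first (highest-quality) settings combo that fits
    under the target bitrate, or None if nothing does."""
    for settings in candidates:               # ordered best-first
        if estimated_kbps(**settings) <= target_kbps:
            return settings
    return None

# Candidate settings, highest quality first - purely illustrative numbers.
candidates = [
    dict(width=1024, height=768, fps=30, bits_per_pixel=0.30),
    dict(width=800,  height=600, fps=30, bits_per_pixel=0.30),
    dict(width=800,  height=600, fps=24, bits_per_pixel=0.25),
    dict(width=640,  height=480, fps=24, bits_per_pixel=0.25),
]

template = find_template(target_kbps=2000, candidates=candidates)
print(template)   # the first combination that fits the bitrate budget
```

The point of saving the winning combination as a template is exactly what the post describes: the compromise only has to be found once per delivery target.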
|
Protagonist Losangeles
Registered User
Join date: 7 Nov 2007
Posts: 29
|
05-04-2008 05:22
One of the hardest discussions to have with anyone involved in post-production is a discussion about codecs... simply because, as AWM Mars rightly points out, there are no "RIGHT" answers and there are plenty of wrong ones.
Which is the reason that we hold different views on workflows.
What needs to be taken into account when thinking about your own workflow is exactly the look you're going for. AWM Mars captures from WindLight at fairly high settings and wants the output to look like an accurate representation of the SL environment. That is a particular creative choice, and his/her workflow mirrors that kind of production.
I've no interest in using SL that way, and my stuff tends to look more like traditional 2D cel animation. So I rarely use the WindLight viewer and, if anything, tend to push the resolution within SL down rather than up. In post-production I spend a massive amount of time on colour grading, softening the image and making the images look as little like SL as possible.
Because of the end images I'm trying to create, even though I don't create footage for web streaming, I can afford to capture using one of the standard broadcast codecs... for me, both the Apple Animation codec and DV PAL provide the results I want.
When people ask questions about workflows there is no ONE RIGHT ANSWER, simply because the desired end result and the software/processes used in post-production all alter the answer.
The answer, therefore, is to have an end result in mind and then experiment with various combinations of capture codec and post-production techniques... until you find one that works for you.
Something else to take into account is the amount of data you're creating in the process... especially in relation to your editing equipment.
One of the downsides of data-heavy capture is the data transfer speed in post-production.
So, if you bring together a combination of data-heavy clips, external storage on a disc that spins too slowly, a slow CPU and insufficient RAM, you may actually end up with worse end images than someone who captured with a faster codec. This is because your NLE will require you to render more often, and more renders in an 8-bit environment mean a progressive loss of image quality.
Hope this helps someone. The answer is... play with what you've got and don't bolt on anything new until you've found out if you can achieve the results you want with the equipment you have.
|
AWM Mars
Scarey Dude :¬)
Join date: 10 Apr 2004
Posts: 3,398
|
05-06-2008 02:59
In conclusion, perhaps the OP could elaborate on what they wish to achieve?
Armed with that information, a much tighter and less general selection of answers can be offered. I was assuming (rightly or wrongly) that the OP simply wanted to make movies in SL for streaming back into SL.
|
Protagonist Losangeles
Registered User
Join date: 7 Nov 2007
Posts: 29
|
05-06-2008 06:04
I agree, but thanks for the conversation... machinima is still in its infancy technically and much video editing technology isn't yet optimised for workflows that start with screen capture technology.
I think it's important to have these conversations and as a result of what we've discussed I am looking more deeply into capture and processing.
Maybe we should be thinking about an ongoing thread to discuss new workflows and post-production techniques.
|
AWM Mars
Scarey Dude :¬)
Join date: 10 Apr 2004
Posts: 3,398
|
05-07-2008 03:06
From: Protagonist Losangeles
I agree, but thanks for the conversation... machinima is still in its infancy technically and much video editing technology isn't yet optimised for workflows that start with screen capture technology.
I think it's important to have these conversations and as a result of what we've discussed I am looking more deeply into capture and processing.
Maybe we should be thinking about an ongoing thread to discuss new work-flows and post production techniques.

There have been many attempts at starting threads on the proliferation of elements pertaining to machinima, but they have fizzled out. LL started a wiki on the matter, which lists various software available (although it is not exhaustive). To be able to do this successfully, we need the machinima community to get behind any spearhead and keep it alive enough that LL makes the thread sticky.

I believe there are two main headings:
1) Those that want to create machinima that can be streamed back into SL.
2) Those that want to use SL and various other platforms as a workshop, so their creations can be taken out into RL.
|