Can you recommend me a 3D camera for videos and pictures?
I'm asking in case someone here has one of these. Are there cameras that for example record videos at 4k@60fps per eye (or at least 1080p@60fps per eye), customizable depth and convergence (or at least very high fixed depth and customizable convergence), very high resolution pictures, minimal motion blur and depth of field, etc?

I don't go outside a lot, but I would like to take at least some 3D videos and pictures. I don't know if I would buy it soon anyway, because right now I have other priorities (saving for Pascal, and maybe a mechanical keyboard and a good chair). I just want to know if that kind of cameras exist for consumers (a.k.a. not James Cameron).

CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com

#1
Posted 01/30/2016 10:07 AM   
masterotaku said:customizable depth and convergence (or at least very high fixed depth and customizable convergence)
...
I just want to know if that kind of cameras exist for consumers (a.k.a. not James Cameron).
Heh, that doesn't even exist for James Cameron. 3D cameras don't exactly have a convergence control like we are used to with 3D Vision. What they have instead is a distance between the lenses, which is fixed on all consumer cameras, and a parallax control, which is just offsetting the left and right images.

They don't use an off-center projection, so the stereo image does not look as good as a game, and they aren't calibrated to the screen size like 3D Vision is, so they either won't have enough depth (and certainly not enough range of depth), or will violate infinity without a parallax adjustment.
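That parallax control amounts to nothing more than a horizontal offset of the two views. As a rough sketch of the idea (a hypothetical helper, not any camera's actual firmware, with images represented as lists of pixel rows):

```python
def adjust_parallax(left, right, shift):
    """Offset the left/right views by cropping `shift` pixel columns.

    This mimics the only stereo adjustment consumer 3D cameras offer:
    a uniform horizontal offset of the two images (no off-center
    projection, no real convergence change). Hypothetical helper for
    illustration; images are lists of pixel rows.
    """
    if shift <= 0:
        return left, right
    # Crop opposite edges so both views keep the same width while the
    # right view slides left relative to the left view.
    cropped_left = [row[shift:] for row in left]
    cropped_right = [row[:-shift] for row in right]
    return cropped_left, cropped_right

# Tiny 5-pixel-wide "images":
left = [[0, 1, 2, 3, 4]]
right = [[5, 6, 7, 8, 9]]
l2, r2 = adjust_parallax(left, right, 2)
print(l2, r2)  # [[2, 3, 4]] [[5, 6, 7]]
```

Note this shifts every object by the same amount, which is exactly why it can fix infinity violations but can't substitute for a real convergence or off-center projection change.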

That said, they can still be fun. I've got a Fujifilm FinePix Real 3D W3 - you might like to read my recent blog post and take a look at some of the photos I've uploaded:

http://blog.darkstarsword.net/2015/12/stereo-photography.html
http://valen.darkstarsword.net/photos/stereo/


My perfect stereo photo rig would consist of:

  • Two identical DSLRs
  • Two identical tilt-shift lenses (not even Hollywood uses these, but my feeling is they should allow for a better 3D effect, as they can produce something similar to the off-center projection we use in 3D games)
  • A rig to:
    • hold both cameras level (and steady)
    • Allow the distance between them to be easily adjusted
    • Keep the settings of both lenses synchronised (and the cameras if possible)
    • Fire both cameras shutters simultaneously (possibly with an external flash firing at the same time)
  • A cheat sheet with lens distance and shift settings for various scenarios and screen sizes because working that out on the fly would be hard
  • A lenticular glasses free 3D screen with the live view feeds of both cameras fed into it

But that's a little out of my budget for now.

    2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit

    Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD

    Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
    Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword

    #2
    Posted 01/30/2016 01:52 PM   
    It's clearly not what you are looking for but I recently bought my second Fujifilm W3 as my first one evaporated into thin air.

    I really believe that 3D for the most part should be made with lenses separated by normal eye separation. What we are doing with 3D Vision is usually having completely different eye separation.

    I noticed that Mirror's Edge was using heavy hyperstereo setup if using 100% depth on my monitor.
    I'm trying to wrap my head around convergence, as both my 3D camera and 3D Vision take parallel images, but heavily modifying the parallax in the 3D image from my camera changes the infinite depth width on the monitor, while doing the same in 3D Vision keeps the same on-screen depth. My only explanation is that 3D Vision must move the camera positions when adjusting convergence.

    What 3D Vision is doing is pretty opposite to standard photo or movie making in 3D.
    The Fujifilm W3 is far from ideal; it has much more noise on its tiny sensors compared to my DSLR.
    Trying to get two DSLRs close enough together will be as hard as getting two high-end movie cameras close together. To see some pro 3D setups, find the behind-the-scenes episode on the first Hobbit movie. Two RED cameras should definitely capture 4K 3D; not sure about 60Hz though, The Hobbit only went for 48Hz. A RED setup should be less than $250,000, but don't quote me on that; getting a rig working with appropriate lenses and setup requires some experience.

    There are some interesting add-ons for the W3 to either widen the eye separation or narrow it down for close-ups. For non-moving targets it's even possible to use a single good camera and a tripod, where you can easily move the camera back and forth. That was used when shooting a timelapse of mushrooms for Sky's 3D plants documentary, with something like 1cm of stereo separation giving enhanced 3D for such small targets.

    Thanks to everybody using my assembler it warms my heart.
    To have a critical piece of code that everyone can enjoy!
    What more can you ask for?

    donations: ulfjalmbrant@hotmail.com

    #3
    Posted 01/30/2016 02:31 PM   
    We were all waiting for a W5, right? Sucks that Fuji abandoned it; I mean, they had no competition, and I think there were enough 3D amateurs around the world for them.

    3D Vision must live! NVIDIA, don't let us down!

    #4
    Posted 01/30/2016 06:12 PM   
    What could a W5 really offer except 1080p video?

    Maybe a slight increase to the 10MP sensors but they would still be physically tiny with the same problem.

    I would probably have wanted even wider angles in the lenses, as capturing table tennis in a fairly small room was pretty hard/impossible. More tele zoom doesn't really make that much impact when taking photos in 3D.

    They would be competing with themselves with only a marginally better product as far as I can tell.

    It's also really a niche product to begin with.

    Found a really old blog post of a basic setup here:

    http://jamesboydpresents.blogspot.se/2010/09/new-3d-micro-dslr-rig-from-redrock.html


    I think you need something more advanced to properly control zoom and focus.

    You need to be lucky to focus at the same distance with two SLRs, and rigging a manual focus puller usually requires special lenses. 3D movies filmed with SLRs do have significant teams and special hardware.

    Thanks to everybody using my assembler it warms my heart.
    To have a critical piece of code that everyone can enjoy!
    What more can you ask for?

    donations: ulfjalmbrant@hotmail.com

    #5
    Posted 01/30/2016 07:11 PM   
    I take back some of my points after seeing this image:

    http://media-cache-ak0.pinimg.com/736x/97/c3/54/97c354fd69bfc0d4e918e751d8bd1ed2.jpg


    It actually offered something interesting with multiple lenses giving three possible lens distances.

    Thanks to everybody using my assembler it warms my heart.
    To have a critical piece of code that everyone can enjoy!
    What more can you ask for?

    donations: ulfjalmbrant@hotmail.com

    #6
    Posted 01/30/2016 07:15 PM   
    Thanks for the answers, everyone. That Fujifilm W3 camera sounds good, but at least on Amazon it's almost impossible to find new, and at inflated prices. Well, as I said, I was only mildly interested. I spend more time looking at my monitor than at real landscapes.

    And thank you for your pictures, DarkStarSword. Increasing the horizontal parallax with the stereoscopic player made them have the depth I'm used to, and the high convergence of the camera made them still look great even if the image was pushed into depth.

    CPU: Intel Core i7 7700K @ 4.9GHz
    Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
    RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
    GPU: Gainward Phoenix 1080 GLH
    Monitor: Asus PG278QR
    Speakers: Logitech Z506
    Donations account: masterotakusuko@gmail.com

    #7
    Posted 01/30/2016 11:14 PM   
    Yeah, you're pretty limited on choices.

    Panasonic made a 3D camera, but it's discontinued.

    Using a previous generation of the GoPro, you could place two of them side by side, with one of them upside down, in a special waterproof container. You'd then have to combine the separate images during editing.

    AFAIK, the special container is not being made for the new generation of GoPros.

    There are quite a few GoPro 3D videos on YouTube that you can check out to see that they worked fairly well.

    Also keep in mind that YouTube probably doesn't do the original footage justice.

    https://www.youtube.com/watch?v=AYJ1YBL39lo&list=PLjxhf9hlu9OEmJsq8QXZ8jOT_i_R0b3HM

    EDIT: Here's the container: https://www.youtube.com/watch?v=0A6C3Apy0mI

    #8
    Posted 01/30/2016 11:45 PM   
    Flugan said:I really believe that 3D for the most part should be made with lenses separated by normal eye separation.
    Whether that is true or not depends on the stereo projection and how they are going to be viewed. The limit is that an object at infinity should produce (or appear to produce) parallel beams of light entering your eyes. If the photo is to be displayed on an intermediate screen this will never be the case, so they are instead placed offset by the viewer's IPD to appear that way. The intermediate screen is also the reason we use a skewed off-center projection in 3D Vision - eliminate the intermediate screen and you might eliminate the need for these as well. VR goggles might present a new answer here as a VR projection is a *lot* closer to how your eyes see than 3D Vision (the cameras used for VR are offset)... However it's still not exactly the same as it still uses an off-center projection matrix to compensate for the distance between the viewers eyes and the center of each side of the VR screen.

    I'm trying to wrap my head around convergence as both my 3D camera and 3D Vision takes parallell images
    Not quite - your camera and your eyes do that, but 3D Vision uses a skewed off-center projection to account for the screen it is to be displayed on.

    but heavily modifying the parallax in the 3D image from my camera changes the infinite depth width on the monitor while doing the same in 3D Vision keeps the same on screen depth.
    Right. 3D Vision's stereo projection is calibrated so that infinity will be 6.5cm on your monitor, but it can only do that because it knows the size of the monitor and uses an off-center projection. A 3D photograph or film doesn't do any of that, and there currently exists no perfect way to view them as a result so the projection will not look right (though as I mentioned above, VR might be able to largely solve that problem with existing 3D cameras, or my idea of using tilt-shift lenses while taking the photo might allow for a perfect stereo capture to be displayed on a screen of a specific size).
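To put a number on that calibration: if infinity is meant to land 6.5cm (one IPD) apart on screen, the pixel disparity follows directly from the monitor's width and resolution. A quick sketch (the monitor width and resolution below are just example figures, not from this thread):

```python
def infinity_disparity_px(screen_width_cm, horizontal_res, ipd_cm=6.5):
    """Pixel disparity at infinity when the stereo projection is
    calibrated so that infinitely distant points sit one IPD
    (~6.5cm) apart on the physical screen."""
    return ipd_cm / screen_width_cm * horizontal_res

# Example: a 27" 16:9 monitor is roughly 59.8cm wide; at 2560x1440:
print(round(infinity_disparity_px(59.8, 2560)))  # 278
```

The same real-world 6.5cm target becomes a different pixel count on every screen, which is why the driver has to know the screen size to do this.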

    My only explanation is that 3D Vision must move the camera positions when adjusting convergence.
    Kinda... VR-style stereo projections adjust the view matrix by the viewer's IPD, which results in the cameras being offset in the world (their adjustment to the projection matrix is purely to account for the misalignment between the eyes and the VR screen). 3D Vision on the other hand does not touch the view matrix, so the cameras are not technically moved in the world, though obviously they do appear to be, and there is a simple calculation you can use to determine by how much:

    If the stereo correction formula is:
    x' = x + separation * (depth - convergence)
    Then if the mono camera is at x=0, depth=0, the stereo camera will be at:
    x' = -separation * convergence

    where separation is the ratio of IPD / screen width

    EDIT: That is not quite right... that is still in projection space, but
    we need the answer in view-space to find the equivalent real world amount:
    x' = -IPD / screen width * convergence * sensor width / (2 * focal length)


    But that only works in conjunction with the off-center projection - simply offsetting the cameras by that much won't work.
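For concreteness, the formulas above translate line for line into code. This is only a sketch with made-up example values, using `separation = IPD / screen width` as stated:

```python
def stereo_correction(x, depth, separation, convergence):
    """Projection-space stereo correction, as quoted above:
    x' = x + separation * (depth - convergence)"""
    return x + separation * (depth - convergence)

def view_space_camera_offset(ipd, screen_width, convergence,
                             sensor_width, focal_length):
    """View-space equivalent of the stereo camera's apparent offset
    from the mono camera, per the EDIT above:
    x' = -IPD / screen width * convergence * sensor width / (2 * focal length)
    All parameter values here are illustrative, not canonical."""
    return -ipd / screen_width * convergence * sensor_width / (2 * focal_length)

# A point with depth equal to the convergence value gets zero correction,
# which is why convergence picks what appears at screen depth:
print(stereo_correction(0.25, 1.0, 0.1088, 1.0))  # 0.25

# Example offset: 6.5cm IPD, 59.8cm screen, 36mm sensor, 50mm lens:
print(view_space_camera_offset(6.5, 59.8, 1.0, 36.0, 50.0) < 0)  # True
```

The sign convention just says the right-eye camera appears shifted toward negative x for positive convergence; the magnitude only makes sense together with the off-center projection, as noted above.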

    The Fujifilm 3W is far from ideal it has much more noise on the tiny sensors compared to my DSLR.
    Agreed. I also find that despite the camera being a single unit it occasionally will focus each lens separately, which has ruined a couple of otherwise great shots.

    Trying to get two DSLRs close enough together will be as hard as getting two high end movie cameras close together. To see some PRO 3D setups find the behind the scenes episode on the first Hobbit movie. Two RED cameras should definitely capture 4K 3D not sure about 60hz though. Hobbit only went for 48hz. A RED setup should be less than $250,000 but don't quote me on that getting a rig working and appropriate lenses and setup requires some experience.
    The setup I'm thinking of could easily cost up to $8000 AUD for two full-frame DSLRs and two tilt-shift lenses (bargain hunting and grey imports might get that down considerably). I don't have a ballpark for how much the rig would cost to build, but it would be somewhat complicated by the fact that the lenses are not going to be parallel thanks to their tilt-shift nature, complicating the mechanism to keep both lenses' focus rings synchronised, which would be required as autofocus is not available on tilt-shift lenses.

    At that price it's a little out of my budget, and I'd probably want to rent the equipment first to make sure it is going to do what I want and to confirm that tilt-shift lenses can produce the right projection, and decide which focal length to go for.

    If I ditched the idea of using tilt-shift lenses (I'd just be building yet another stereo rig the same as everyone else's) I could get the price down further, and eliminate the need to use the rig to mechanically synchronise the lenses by instead implementing it in Magic Lantern (an open source custom firmware overlay for some Canon DSLRs), which is already capable of adjusting the focus on any autofocus-capable lens. I could also use my existing Canon 500D as one of the two cameras, but if/when I buy a new camera body it will likely be a 6D (or whatever has replaced it by then), which is full frame, so I'd need to crop to compensate for that and for any other differences between them (different image sensors make it a concern to get images from both with the same colours, white balance, low-light performance, high-ISO grain control, etc. Partially solvable in post, but an extra complication that would be nicer to avoid by using two identical cameras).
    Flugan said:I would probably have desired even wider angles in the lenses as capturing table tennis in a fairly small room was pretty hard/impossible. More telezoom not really making that much impact when taking photos in 3D.
    Yep - higher focal length (AKA lower FOV) compresses depth (the dolly zoom effect you see in movies is possibly the best demonstration of this). This happens in 2D and 3D, and even happens in games (there are a couple of cutscenes in Dreamfall Chapters that I feel were ruined by using too low an FOV, fortunately only a couple). In 2D photos this can be used as one of the methods to "blur the background" (technically it magnifies the background - a wide aperture blurs it), but in 3D it just looks crap and should be avoided.

    Found a really old blog post of a basic setup here:
    http://jamesboydpresents.blogspot.se/2010/09/new-3d-micro-dslr-rig-from-redrock.html

    I think you need something more advanced to properly control zoom and focus.

    You need to be lucky to focus at the same distance with two SLR and rigging manual focus puller usually requires special lenses.
    I agree, and while the setup I'm thinking of doesn't have zoom to worry about because they are prime lenses, synchronising the focus would be even harder than with a regular lens (and this is an area where my skills are limited, so I wouldn't really know where to begin, but I know people who might be able to solve this).

    I haven't tried it so maybe I'm oversimplifying things, but I would have thought that a regular lens with a decent focus ring (a full-time manual focus ring would probably be a good idea) should be relatively easy to synchronise with a basic belt and the right amount of tension, though doing so in a way that allows the cameras' distance to be adjusted would be harder.

    Or, as I mentioned above, using custom firmware to manually control the autofocus on an AF-capable lens could solve the problem if it is precise enough (Magic Lantern can partially do this, but would need some more code to synchronise two cameras together).

    Flugan said:I take back some of my points after seeing this image:
    http://media-cache-ak0.pinimg.com/736x/97/c3/54/97c354fd69bfc0d4e918e751d8bd1ed2.jpg

    It actually offered something interesting with multiple lenses giving three possible lens distances.
    That's pretty cool - is that legit?

    2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit

    Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD

    Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
    Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword

    #9
    Posted 01/31/2016 06:57 AM   
    As far as I can tell the image appeared on a review site forum either as a prank or a preview of a camera that died on the drawing board. It feels too clever to be fake but obviously no such camera has been seen in practice as it was never made.

    A better link is here:

    http://www.dpreview.com/forums/thread/3380943


    Same image.

    Back on 3D Vision topic.
    If you look at the documentation both virtual eyes are pointing straight forward and there is no toe-in.
    They do crop the images left and right for each eye to make them match up on a single virtual screen.
    This means that the angle from right eye to right side of the screen is significantly smaller than angle from right eye to left side of the screen and the opposite is true for left eye.
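The asymmetry is easy to put numbers on. A quick sketch, with made-up screen and viewing dimensions (the eye offset is half a typical ~6.5cm IPD):

```python
import math

def edge_angles(eye_offset_cm, screen_width_cm, distance_cm):
    """Angles from a horizontally offset eye to the nearer and
    farther screen edges."""
    near_edge = math.degrees(math.atan((screen_width_cm / 2 - eye_offset_cm) / distance_cm))
    far_edge  = math.degrees(math.atan((screen_width_cm / 2 + eye_offset_cm) / distance_cm))
    return near_edge, far_edge

# Right eye, 60cm wide screen viewed from 70cm:
near_edge, far_edge = edge_angles(3.25, 60, 70)
print(f"{near_edge:.1f} deg to the right edge, {far_edge:.1f} deg to the left edge")
```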

    The stereo correction formula confirms my statement that the eye separation changes when you adjust convergence. It truly is the opposite way of looking at producing 3D images while never having standard eye distance between the virtual eyes. 3D Vision allows 100% depth on any screen with pretty much any convergence until the image breaks.
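For reference, the stereo correction being discussed is commonly written as the driver adding `separation * (w - convergence)` to the clip-space x coordinate in the vertex shader. A toy sketch of that formula (not actual driver code):

```python
def stereo_shift(x_clip, w_clip, separation, convergence, eye=+1):
    """Per-vertex horizontal offset applied for one eye (+1 right, -1 left),
    following the commonly cited 3D Vision correction formula."""
    return x_clip + eye * separation * (w_clip - convergence)

# A vertex exactly at the convergence depth gets zero parallax (screen depth);
# anything deeper shifts outward, scaled by separation:
assert stereo_shift(0.0, w_clip=5.0, separation=0.1, convergence=5.0) == 0.0
```

Raising convergence moves the zero-parallax plane deeper and rescales the offset of every other vertex at once, which is the "effective eye separation changes with convergence" behaviour described above.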

    I tried your 17" 1080p images on my 27" and I couldn't watch them in full-screen - I had to use windowed mode. While you can go for hyperstereo, the 7.7cm separation of my W3 gives me about 2cm separation at infinity on my 27", which is, not surprisingly, the same separation I've measured in a 3D Blu-ray. I've not measured 3D Blu-rays in a while, so focusing on the 2cm at 27" measurement, you would only need a 105" screen to get the original infinite depth. That might sound like a small figure, but 3D movies being viewed on 813" screens is a bit scary.
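The ~105" figure checks out: infinity parallax scales linearly with screen width, so (taking the 2cm-at-27" measurement above at face value):

```python
measured_parallax_cm = 2.0   # infinity separation measured on the 27" screen
camera_baseline_cm = 7.7     # FinePix W3 lens spacing
screen_needed = 27 * camera_baseline_cm / measured_parallax_cm
print(f'{screen_needed:.0f}" screen')  # ~104", matching the ~105" figure above
```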

    Sorry for derailing this thread completely - I really don't know of any 4K 60Hz 3D cameras that you can buy. Everything high-end is done with various more or less advanced 3D rigs, and 4K 60Hz requires an expensive state of the art rig.

    Thanks to everybody using my assembler it warms my heart.
    To have a critical piece of code that everyone can enjoy!
    What more can you ask for?

    donations: ulfjalmbrant@hotmail.com

    #10
    Posted 01/31/2016 09:56 PM   
    Back on the W5 apparently a fake from start to finish.


    #11
    Posted 01/31/2016 10:00 PM   
    GoPro has a camera capable of 4K at 30FPS in 2D. http://shop.gopro.com/

    There was talk about adding 3D capability to the GoPro 4.

    So far 3D is limited to the first 3 generations and uses a cable between the two of them for frame lock.

    The FAQ says that the distance between the two camera lenses when seated in the 3D HERO System is 33 mm

    https://gopro.com/support/3d-hero-system-support

    #12
    Posted 02/01/2016 12:51 AM   
    Panasonic Z10000 - good choice

    4K3D on passive LG OLED 4K TV 65C6V, GTX 1080 Ti, Win 8.1 64 Pro, i7-7700, 3D-Vision 2 on Benq LW61-LED PJ. HTC Vive. Panasonic Z-10000 3D Camcorder

    #13
    Posted 02/01/2016 03:52 PM   
    Interesting read on Ortho-Stereoscopic Perspective (OSP):

    http://www.cyclopital3d.com/The_Ortho-stereoscopic_Persective_and_3D_Realism.pdf

    "The OSP is obtained when the physical Field Of View (FOV) of the viewed image is the same FOV that the camera recorded. This is a very important factor in achieving the realism promised by 3D photography."
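Putting that condition into numbers: OSP pins the viewing distance to the screen width and the camera's horizontal FOV. A quick sketch (the 60cm screen and 65 degree FOV are made-up example values):

```python
import math

def osp_viewing_distance(screen_width_cm, camera_hfov_deg):
    """Distance at which the screen subtends the same horizontal FOV
    the camera recorded - the OSP condition from the paper."""
    return (screen_width_cm / 2) / math.tan(math.radians(camera_hfov_deg) / 2)

print(f"{osp_viewing_distance(60, 65):.1f}cm")  # ~47cm for this example
```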

    #14
    Posted 02/03/2016 09:05 PM   
    Most digital 3D viewing systems (computer monitors or “picture frames”) provide only a very narrow FOV as compared to “what the camera saw,” resulting in severe 3D distortion. When a stereoscopic image is viewed with a FOV that is narrower than the FOV of the camera, the “stretch” in the z-axis makes objects in the scene look smaller than real life, like a scale model, the scene does not look “real.”
    Their observations are mostly accurate, but their reasoning is wrong - the FOV mismatch is a problem for achieving OSP, but when using a screen or print we aren't trying to achieve OSP, and the problems we face are not the same (photography is generally not about reproducing the scene exactly anyway, so why should 3D photography be constrained to this?).

    We *don't care about FOV being realistic* - in fact, I would suggest intentionally using a wider FOV and moving closer to the subject to maximise the 3D effect. The problems we face are 1) a mismatched stereo projection that fails to take the maths involved with viewing an intermediate screen into account (and this is why I would love a chance to experiment with using tilt-shift lenses for stereo photography), and 2) inability to calibrate the stereo projection to the screen size and viewer's IPD (the only thing we can adjust is the parallax, which is a very poor substitute for this).
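To make the distinction concrete: the off-center projection referred to above shears each eye's frustum rather than rotating it (toe-in) or merely offsetting finished images. A minimal sketch of the usual glFrustum-style construction (all dimensions are illustrative):

```python
def off_axis_frustum(eye_x_m, half_width_m, half_height_m, screen_dist_m, near_m):
    """Frustum edges for one eye, shifted by the eye offset and scaled
    down to the near plane (the standard off-axis stereo construction)."""
    scale = near_m / screen_dist_m
    left   = (-half_width_m - eye_x_m) * scale
    right  = ( half_width_m - eye_x_m) * scale
    bottom = -half_height_m * scale
    top    =  half_height_m * scale
    return left, right, bottom, top

# Right eye (+3.25cm) looking at a 60x34cm screen from 70cm away:
l, r, b, t = off_axis_frustum(0.0325, 0.30, 0.17, 0.70, 0.10)
```

A plain parallax adjustment translates both finished images by a constant, which cannot reproduce this per-eye asymmetry (|left| != |right| here) - hence the interest in tilt-shift lenses, whose shift movement approximates the same shear optically.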

    OSP has its place, and I hope to see it reproduced in VR (which should solve the FOV mismatch issue the paper is about, and the point about looking in the same direction the photo was taken - if manufacturers start adding accelerometers to 3D cameras, a VR setup could easily display the photo in the correct orientation), but it is a completely different effect to what we produce with 3D Vision, and I'm not convinced that "more realistic" is always "better"... however, 3D cameras can't do what we do with 3D Vision, so perhaps it may generally be better for them.


    #15
    Posted 02/04/2016 12:32 AM   