The Nextbit Robin looks like your standard smartphone, but its cool blue exterior houses the first real cloud-based phone. It's an innovative device, but unfortunately its camera falls short of the best in its class. DxOMark has put the Robin through its standard mobile tests, awarding it 81 points and putting it in 18th place in DxO's mobile rankings. While image testers liked the Robin's good detail preservation and fast AF, unusually high noise levels kept Nextbit's smartphone out of the higher echelons of the DxO rankings.
Among the features introduced in Nikon's new D5 and D500 DSLRs, we're very excited by automated AF Fine Tune. This feature allows users to quickly fine-tune their specific camera bodies and lenses, maximizing the chances of a sharp shot and avoiding the lengthy process of trial-and-error tuning that was previously necessary. Watch our video and read our in-depth analysis.
What's the problem?
If you're a DSLR shooter, you may be acutely aware of consistent front or back-focus issues with some of your lenses, particularly fast ones like F1.4 primes. Mirrorless users tend to not have such issues, because their cameras focus using their image sensors. When a mirrorless camera says it's achieved focus, generally it's actually in focus. That doesn't necessarily hold true with DSLRs, which use a secondary phase-detect sensor under the mirror as a sort of proxy for focus at the imaging plane. This makes DSLR focus sensitive to misalignments between the secondary AF module and the image sensor, and also requires calibration of the optics inside the module itself. Furthermore, the way these phase-detect AF modules work makes them sensitive to certain lens aberrations, like spherical aberration.
Manufacturers of DSLR bodies and lenses do a lot of calibrations to make sure that this isn’t an issue, calibrating every AF point at the factory, writing look-up tables into lenses, and more. But the reality of tolerances is such that you’ll be best off if you calibrate your particular copy of a lens and your particular copy of a body yourself. That’s what AF Fine Tune, or AF micro-adjustment as Canon calls it, is all about.
State of the current art...
Until now, accurate calibration has required a cumbersome procedure. We'd often set a camera up on a tripod and align it to a LensAlign (which has a sighting tool), then have to change the setup to test different subject distances, lighting, or lenses. Some photographers even try to Fine Tune on the spot by trying different values and seeing if a real-world target looks sharper or not - but this method is extremely prone to error. Solutions like FoCal have tried to automate the procedure, but again, the requirement of a chart and a computer is cumbersome.*
Nikon's new automated AF Fine Tune makes things as easy as child's play. It uses contrast-detect AF in live view, which focuses using the image sensor and is nearly always accurate, to calibrate its own phase-detect AF system. Watch our video above to get an idea of just how easy it is to calibrate your lenses on the new D5 and D500 cameras.
A couple of things are worth keeping in mind. For some lenses and systems, the optimal calibration value can change for different subject distances. This isn't necessarily always the case, but you may wish to calibrate for the subject distances you're most likely to shoot for any particular lens. For a good all-round calibration, we're told that using a target approximately 40x the focal length away strikes a good balance.
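As a quick helper, the 40x guideline works out like this (a sketch based on the figure we were given, not an official Nikon formula):

```python
def calibration_distance_m(focal_length_mm, factor=40):
    """Rule-of-thumb target distance for a general-purpose AF Fine Tune run.

    The 40x multiplier is the balance point suggested above; it's a
    guideline, not a Nikon specification, so treat the result as a
    starting point rather than a requirement.
    """
    return focal_length_mm * factor / 1000.0  # convert mm to meters

# An 85mm lens: place the target roughly 3.4m away
print(calibration_distance_m(85))   # 3.4
# A 50mm lens: roughly 2m
print(calibration_distance_m(50))   # 2.0
```

For longer telephotos the rule quickly pushes the target across the room or outdoors, which is one reason distance-specific calibration matters for that class of lens.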
The key here is to play around a bit. Try a couple different distances, a few different runs, and make sure you're getting a consistent result. Sometimes we've found the optimal value to change with lighting temperature, but this sort of thing is precisely why the automated procedure is so valuable: if you're running into trouble with focus, you can - right at the wedding reception you're shooting - set the camera on a table, point it at a static object, and calibrate your camera in under 10 seconds. Yeah, we timed ourselves.
Here's an example of how Fine Tune helped calibrate our Nikon 24/1.8 to our D5. Roll your mouse over the 'OFF' and 'ON' buttons to see Sam's eye sharpen up. If you click on the main image, you can see the full image in a separate window, where you'll notice that the 'OFF' shot is front-focused on Sam's nose, while the 'ON' shot is focused correctly on his eye. We placed a single AF point over Sam's left eye (on camera right) for focus in both cases.
AF Fine Tune OFF
(focused on nose)
AF Fine Tune ON
(focused on eye)
In this case, for this lens paired to this body, automated AF Fine Tune found a value of +14 was best. This indicates that for correct focus, the camera has to shift focus backward an arbitrary 14 units from the focus reading the phase-detect sensor makes. In other words, out of the box, this lens on our D5 front-focuses. If it had back-focused out-of-the-box by a similar amount, we might have expected the automated procedure to find -14 to be the optimal value.
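Our reading of the sign convention can be sketched as a toy model (the 'units' are an arbitrary camera-internal scale, and this is illustrative, not Nikon's actual firmware math):

```python
def corrected_focus(pdaf_reading, fine_tune_value):
    """Toy model of the AF Fine Tune sign convention.

    A positive fine-tune value shifts focus backward, correcting a lens
    that front-focuses out of the box; a negative value pulls focus
    forward, correcting back-focus.
    """
    return pdaf_reading + fine_tune_value

# Our 24/1.8 front-focused, so the automated routine settled on +14;
# a lens that back-focused by a similar amount would need about -14.
print(corrected_focus(0, +14))   # 14
print(corrected_focus(0, -14))   # -14
```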
How we'd like to see this feature evolve
AF Fine Tune currently only writes one global value per lens. This means the calibration value can't be adjusted for either end of a zoom. Furthermore, only the center point can be calibrated - the camera assumes that the calibration at the factory ensures all points are consistent with one another and, importantly, the center point. Finally, as mentioned earlier, sometimes the optimal value can change based on subject distance.
Canon cameras currently at least offer two microadjustment values for either end of a zoom, but don't offer any sort of automation to help you out. Sigma and Tamron USB docks allow for calibration at either end of a zoom, and for 3 to 4 different subject distance ranges, allowing for a high degree of calibration accuracy. Unfortunately, entering 4 different subject distance ranges for both ends of a zoom means the user has to literally set up the camera 8 times, with some sort of test target for accurate assessment - hardly practical for most working photographers.
The key here is automation: automating opens up a world of opportunities, and automated Fine Tune is an important first step. We'd even imagine a future implementation where calibration data for all focus points is stored and learned from over time. Every time you calibrate a particular point, the camera could retain subject distance information (passed on to it via the lens), and over time learn the best calibration values for each point, for all subject distances, for different temperatures and lighting as well (the latter are often minor concerns).
To sum up...
Nikon's automated AF Fine Tune is truly one of the most welcome features we've seen added to a DSLR in recent times. We've wondered for years why camera companies don't use their contrast-detect AF to self-calibrate their phase-detect systems, instead relegating calibration to a cumbersome end-user experience.
Automated Fine Tune changes all that. It’s a really useful feature that takes a lot of guesswork and cumbersome aspects of calibrating yourself out of the equation, allowing you to do it on the spot, at an event, anywhere, on the fly. In fact, anyone working with shallow depth-of-field imagery should absolutely perform this procedure. Wedding, newborn, portrait, lifestyle, photojournalist, and even sports photographers: take note.
* We really like Reikan FoCal for research purposes though: you get a plethora of data for how a body/lens combination behaves at different subject distances, on different days, under different lighting, and even a map of the optimal calibration value per AF point. Of course, since you can only enter one global adjustment value into your camera, this information is a bit more academic, but if you want to get an idea of the behavior of your system, there's probably no more comprehensive tool than FoCal.
Production studio and early VR adopter Condition One has created its own rugged VR camera for internal use called 'Bison,' a name which references the first thing the company’s founder Danfung Dennis recorded with an early prototype. The camera won't be put up for sale, and will instead be used to help create future VR features.
The company showcased Bison at NAB recently; the rig features a total of 16 cameras that produce 360-degree stereoscopic 3D videos with 3D positional audio. Videos are recorded at 48 fps with a combined 5.7K resolution. According to Condition One’s website, Bison can shoot footage at distances as close as 60cm/2ft, has a 2 hour recording time, a thermal management system, custom aluminum rig, custom carbon fiber tripod, remote trigger with a 792m/2600ft range and tablet control.
Final footage is created using Condition One's proprietary 3D 360 stitching algorithms and software; the company describes the process as being 'a fully automated production pipeline' that it claims is the fastest and highest quality in the industry. Companies and teams interested in creating movies with Bison will need to team up with the studio to gain access.
|Photo by Jeff Keller|
The Panasonic Lumix DMC-GX85 / GX80 takes just about everything we like about the GX8 and crams it into a body size that's a lot more in line with the older GX7. In the shrinking process, you lose the high-res tilting viewfinder, the new 20MP sensor and weather sealing. But don't think you're getting a bad deal. You gain Panasonic's Dual-IS feature while recording 4K video (and you still get incredibly effective 'dual' in-body and in-lens stabilization in stills as well), you get an updated 16MP chip that now lacks an anti-aliasing filter, and there's a new JPEG mode dubbed L. Monochrome.
We've taken a pre-production GX85 with us around the Puget Sound region with a variety of lenses to see how it measures up.
To the point...
Quick and to the point: that's the reasoning behind the use of linear focus motors, but it's less true of the latest blog post on the subject over on LensRentals.com - and that's exactly what we love about the crew's in-depth teardowns. In their latest post they tear apart a series of linear-drive lenses and discuss the various designs they've encountered. Some are pretty robust and others, well, take a look for yourself...
The need for new designs
The ring-type focus motors [pictured above] that were traditionally the default choice for high-end DSLR lenses are not especially well suited to the needs of mirrorless cameras or video shooting. Contrast detection autofocus requires not just being able to move a focus group quickly but also the ability to stop it, then drive it back in the other direction, all with high precision. Video requires silent and carefully-controlled focus drive, to allow smooth refocusing while the camera is recording. These different requirements have prompted the adoption of new types of focus motors.
Linear electromagnetic motors
Among the more popular alternatives to ring-type drive is the linear motor, which features a permanent magnet and a coil of wire that, when electricity is run through it, slides along a bar parallel with the magnet. In principle these fulfill the things demanded of them: fast, precise and quiet (we've been very impressed by how fast some of the linear motor lenses we've used can be).
Surprisingly, the internet has very few good diagrams of these designs, but you can sometimes recognize lenses that use this type of motor because the focus element rattles around when the camera is switched off. This is because in many linear motor lenses the focus element is only held in position when power is being provided to the focus coil - the rest of the time, the focus carriage can just slide up and down its rails. This isn't true of the Sony and Zeiss designs that much of the blog post discusses - these appear to have some sort of brake to stop this disconcerting behavior.
Rattle and, er, break
Generally we don't worry too much about this rattling, but perhaps we should. LensRentals' experience with large numbers of hard-worked lenses reveals that not all linear motor designs are the same. Early Sony motors attach the moving coil to the focus element carriage with just a single blob of glue. Oddly enough, this can fail, leaving the coil racing up and down the rail with the focus element uncoupled. Later designs do a better job of securing the moving coil to the carriage, prompting Roger Cicala to define two categories within lenses of this kind: Type 1 motors and Type 1a designs that are very similar but don't break so readily.
No right answer
As well as highlighting a failure mechanism of poor designs, Cicala and Co's teardowns hint at a fundamental shortcoming of linear motors' capabilities. Fujifilm's use of two, three and four linear motors in some lens designs suggests that they struggle to move large, heavy lens elements quickly, necessitating a brute-force approach.
This is also likely to explain why Sony adopted three different focus drive technologies (linear electromagnetic motor, piezoelectric direct drive and ring-type motors, sometimes in combination) in its recently-announced GM series of lenses: because there isn't yet a single technology that provides all the necessary characteristics in a way that works for all lens designs.
Results, not technologies
Like LensRentals, we've seen very different results between the best and the worst examples of each lens motor type, which is why we try to concentrate on performance, rather than technology, when we write about lenses. We've also been lucky not to experience any of the motor failures (perhaps better described as motor detachments) that LensRentals has seen, but it's interesting to see the designs of lenses improve as manufacturers become more experienced at using each technology. Or, as in the case of the Sony 70-200mm F2.8 pictured here, a mixture of technologies.
We also hope Cicala makes good on his promise to look at other emerging focus technologies, and the ways in which they're developing, in the coming weeks.
|Lytro debuted its Cinema prototype to an eager crowd at NAB 2016 in Las Vegas, NV. It sports the highest resolution video sensor ever made.|
Lytro greeted a packed showroom at NAB 2016 in Las Vegas, Nevada to demo its prototype Lytro Cinema camera and platform, as well as debut footage shot on the system. To say we're impressed by what we saw would be an understatement: Lytro may be poised to change the face of cinema forever.
The short film 'Life', containing footage shot both on Lytro Cinema as well as an Arri Alexa, demonstrated some of the exciting applications of light field in video. Directed by Academy Award winner Robert Stromberg and shot by VRC Chief Imaging Scientist David Stump, 'Life' showcased the ability of light field to obviate green screens, allowing for extraction of backgrounds or other scene elements based on depth information, and seamless integration of CGI elements into scenes. Lytro calls it 'depth screening', and the effect looked realistic to us.
Just as exciting was the demonstration of a movable virtual camera in post: since the light field contains multiple perspectives, a movie-maker can add in camera movement at the editing stage, despite using a static camera to shoot. And we're not talking about a simple pan left/right, up/down, or a simple Ken Burns effect... we're talking about actual perspective shifts. Up, down, left, right, back and forth, even short dolly movements - all simulated by moving a virtual camera in post, not by actually having to move the camera on set. To see the effect, have a look at our interview with Ariel Braunstein of Lytro, where he presents a camera fly-through from a single Lytro Illum shot (3:39 - 4:05):
The Lytro Cinema is capable of capturing these multiple perspectives because of 'sub-aperture imaging'. Head of Light Field Video Jon Karafin explains that in front of the sensor sits a microlens array consisting of millions of small lenses, similar to those found on traditional sensors. The difference, though, is that there is a 6x6 pixel array underneath each microlens. The image formed by taking the pixel at the same position within every one of those 6x6 blocks represents the scene as seen through one portion, or 'sub-aperture', of the lens. There are 36 of these 'sub-aperture' images, each providing a different perspective, which then allows for computational reconstruction of the image with all the benefits of light field.
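To make the idea concrete, here's a minimal NumPy sketch (our illustration, not Lytro's actual pipeline) of how sub-aperture images fall out of a plenoptic raw frame, assuming an idealized, perfectly aligned 6x6 pixel grid under each microlens:

```python
import numpy as np

def extract_subaperture_images(raw, n=6):
    """Split a plenoptic raw frame into n*n sub-aperture images.

    `raw` is a 2D array whose height and width are both multiples of n,
    with an n x n block of pixels under each microlens. Picking the pixel
    at the same (u, v) offset within every block yields one sub-aperture
    image: the scene as seen through one portion of the lens.
    """
    h, w = raw.shape
    # Separate the per-microlens (u, v) offsets into their own axes...
    blocks = raw.reshape(h // n, n, w // n, n)
    # ...then reorder so views[u, v] is the sub-aperture image for (u, v)
    return blocks.transpose(1, 3, 0, 2)

raw = np.arange(144).reshape(12, 12)      # toy frame: 2x2 microlenses
views = extract_subaperture_images(raw)
print(views.shape)   # (6, 6, 2, 2): 36 sub-aperture images of 2x2 pixels
```

Real decoders also have to deal with microlens rotation, packing geometry and vignetting, but this reshaping is the core of why 36 pixels per microlens yield 36 slightly different views of the scene.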
The 36 different perspectives afford you some freedom of movement in moving a virtual camera in post, but it is of course limited, affected by considerations like lens, focal length, and subject distance. It's not clear yet what that range of freedom is with the Cinema, but what we saw in the short film was impressive, something cinematographers will undoubtedly welcome in place of setting up motion rigs for small camera movements. Even from a consumer perspective, consider what auto-curation of user-generated content could do with tools like these. Think Animoto on steroids.
We've focused on depth screening and perspective shift, but let's not forget all the other benefits light field brings. The multiple perspectives captured mean you can generate 3D images or video from every shot at any desired parallax disparity (3D filmmakers often have to choose their disparity on-set, only able to optimize for one set of viewing conditions). You can focus your image after the fact, which saves critical focus and focus approach (its cadence) for post.* Selective depth-of-field is also available in post: you can choose whether you want shallow, or extended, depth-of-field, or even transition from selective to extensive depth-of-field in your timeline. You can even isolate shallow or extended depth-of-field to different objects in the scene using focus spread: say F5.6 for a face to get it all in focus, but F0.3 for the rest of the scene.
Speaking of F0.3 (yes, you read that right), light field allows you to simulate in post faster (and smaller) apertures previously thought impossible, which in turn places fewer demands on lens design. That's what allowed the Illum camera to house a 30-250mm equiv. F2.0 constant aperture lens in a relatively small and lightweight body. You could open that aperture up to F1.0 in post, and at the demo of Cinema at NAB, Lytro impressed its audience with - we kid you not - F0.3 depth-of-field footage. A Lytro representative claimed even faster apertures can be simulated.
But all this doesn't come without a cost: the Lytro Cinema appears massive, and rightfully so. A 6x6 pixel array underneath each microlens means there are 36 pixels for every 1 pixel on a traditional camera; so to maintain spatial resolution, you need to grow your sensor, and your total number of pixels. Which is exactly what Lytro did - the sensor housing appeared to our eyes to be over a foot in width, sporting a whopping 755 million total pixels. That should mean that at worst, you'd get 755/36, or roughly 21MP final video output. Final output resolution was a concern with previous Lytro cameras: the Illum yielded roughly 5MP equivalent (sometimes worse) stills from a 40MP sensor. However, as we understand it, the theoretical lowest resolution of 21MP with the Cinema sensor means that output resolution shouldn't be a concern for 4K, or even higher-res, video.**
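The back-of-the-envelope math behind that worst-case figure is simple: divide the total photosite count by the 36 pixels under each microlens, then compare against the roughly 8.3MP a 4K UHD frame requires:

```python
# Worst-case effective resolution of a plenoptic sensor: total photosites
# divided by the pixels under each microlens (6 x 6 = 36 for the Cinema).
total_pixels = 755_000_000           # 755MP sensor
pixels_per_microlens = 6 * 6         # 6x6 sub-aperture grid

effective_mp = total_pixels / pixels_per_microlens / 1_000_000
uhd_4k_mp = 3840 * 2160 / 1_000_000  # ~8.3MP per 4K UHD frame

print(round(effective_mp, 1))        # 21.0
print(effective_mp > uhd_4k_mp)      # True: headroom even beyond 4K
```

As the second footnote explains, real effective resolution varies with focus placement, so this is a floor estimate rather than a guarantee.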
The optics appear as massive as the resolution, but that's partly because there are two optical paths: one for the 755MP light field capture, and the other to give the cinematographer a live preview for framing, focus, and exposure. The insane data rates for the light field capture, on the order of terabytes for every few seconds, mean that Lytro Cinema comes with its own server on-set. The sensor is also actively cooled. The total unit lives on rails on wheels, so forget hand-held footage - for now. Bear in mind, though, that the original Technicolor cinema camera introduced back in 1932 appeared similarly gargantuan, and Lytro specifically mentioned that different versions of Cinema are planned, some smaller in size.
Processing all that data isn't easy - in fact, no mortal laptop or desktop need apply. Lytro is partnering with Google to send footage to the cloud, where thousands of CPUs crunch the data and provide you real-time proxies for editing. Lytro stated the importance of integration with existing workflows, and to that end is building plug-ins to allow for light field video editing within existing editors - starting with Nuke.
The 4K footage from the Lytro Cinema that was mixed with Arri Alexa footage to create the short 'Life', viewed from our seating position, appeared comparable to what one might expect from professional cinema capture. CEO Jason Rosenthal commented that the short film was shot on both cameras to speak to how interchangeable footage can be with other cameras. Importantly, the footage appeared virtually noise free - which one might expect of such a large sensor area. Furthermore, Jon Karafin pointed out there are 'hundreds of input samples for every one output sample', which means a significant amount of noise averaging occurs, yielding a clean image, and a claimed 16 stops of dynamic range. In fact, in 'Life', noise had to be added back in to get the Lytro footage to match the Alexa.
That's incredibly impressive, given all the advantages light field brings. This may be the start of something truly transformative for the industry. After all, who wouldn't want the option for F0.3 depth-of-field with perfect focus in post, adjustable shutter angle and frame rate, compellingly real 3D imagery when paired with a light field display, and more? With increased capabilities for handling large data bandwidths, larger sensors, and more pixels, we think some form of light field will exist perhaps in most cameras of the future. Particularly when it comes to virtual reality capture, which Lytro also intends to disrupt with Immerge.
It's admirable just how far Lytro has come in such a short while, and we can't wait to see what's next. For more information, visit Lytro Cinema.
* If it's anything like the Illum, though, some level of focusing will still be required on set, as there are optimal planes of refocus-ability.
** We're not certain of the actual trade-off for the current Lytro Cinema. It's correlated to the number of pixels underneath each microlens, and effective resolution can vary at different focal planes, or change based on where focus was placed. This may be one reason for the overkill resolution - to ensure that at worst, capture is high resolution enough to meet high demands.
Nikon has announced delays for several recently introduced compacts, including the DL-series compacts, the Coolpix A300/A900, B500/B700 and the KeyMission 360. In a statement issued today, Nikon also indicates that its parts suppliers in the Kumamoto Prefecture affected by recent earthquakes are experiencing delays, which will have an inevitable impact on production across much of its product range, though it's unclear to what degree the revised shipping dates are related. Sony appears to be one of those affected suppliers, as its sensor production is currently shut down, and damage at a Fujifilm subsidiary that produces LCD components may also have a trickle-down effect.
The Nikon DL18-50, DL24-85 and DL24-500 1"-sensor compacts were originally scheduled for a June release, and a new shipping date has yet to be determined. Nikon cites 'serious issues with the integrated circuit for image processing' as the cause for the delay.
According to Nikon, the Coolpix A300 and B500 will be delayed until May 2016, and the Coolpix A900 and B700 are pushed back until July 2016. All four were originally scheduled for an April release. The news is worse for the KeyMission 360 action cam. Originally expected this spring, it won't ship until October 2016.
Update on digital camera release
April 20, 2016 TOKYO - Nikon Corporation announced today delays in the release of new digital cameras and the effects of the 2016 Kumamoto earthquakes.
Delays in the release of new digital cameras
The new Nikon compact digital cameras, COOLPIX A300 and B500 will be available in May 2016, the COOLPIX A900 and B700 will arrive in July 2016 and the Nikon KeyMission 360 action camera will be available in October 2016 as more time is required for software adjustment.
The new COOLPIX products were originally scheduled for release in April and the KeyMission 360 action camera was announced for a spring 2016 release.
In addition, the premium compact cameras, Nikon DL18-50 f/1.8-2.8, DL24-85 f/1.8-2.8, and DL24-500 f/2.8-5.6, will be delayed due to the serious issues with the integrated circuit for image processing built into the three new premium compact cameras, originally scheduled for a June 2016 release.
The new release date has yet to be determined and we will announce the information as soon as it is decided.
The effects of the 2016 Kumamoto earthquakes
The suppliers of parts for Nikon products such as digital cameras with interchangeable lenses, interchangeable lenses, and compact digital cameras, which include those mentioned above, were affected by the series of earthquakes that started on April 14 in Kumamoto Prefecture in Japan, and this will inevitably impact our production and sales.
We are currently investigating the situation, and we will announce the details as soon as they are confirmed.
We sincerely apologize to our customers, business partners and all those who have expressed interest in these models for the delays. We are making every effort to bring these models to market at the earliest possible date without compromising on our standards and the total Nikon product experience.
The major earthquakes that struck Japan on April 14th and 15th have closed Sony's Kumamoto factory, which primarily manufactures sensors for digital cameras. Due to ongoing aftershocks and inspections of the buildings and manufacturing equipment, it's not clear when the Kumamoto factory will be back in business.
The company's factories in Isahaya City and Oita City were shuttered briefly, but have since resumed normal operations. Sony says that the impact on its financials is 'currently being evaluated.'
Nikon says that it too may see production delays as a result of suppliers affected by the earthquakes (Sony is a known supplier of Nikon's sensors).
Status of Sony Group Manufacturing Operations Affected by 2016 Kumamoto Earthquakes
(Tokyo, April 18, 2016) Sony Corporation ("Sony") extends its deepest sympathies to all those affected by the earthquakes in Kumamoto.
Due to the earthquake of April 14 and subsequent earthquakes in the Kumamoto region, the following Sony Group manufacturing sites have been affected:
Operations at Sony Semiconductor Manufacturing Corporation's Kumamoto Technology Center (located in Kikuchi Gun, Kumamoto Prefecture), which primarily manufactures image sensors for digital cameras and security cameras as well as micro-display devices, were halted after the earthquake on April 14, and currently remain suspended. Damage to the site's building and manufacturing lines is currently being evaluated, and with aftershocks continuing, the timeframe for resuming operations has yet to be determined.
Although some of the manufacturing equipment at Sony Semiconductor Manufacturing Corporation's Nagasaki Technology Center (located in Isahaya City, Nagasaki Prefecture), which is Sony's main facility for smartphone image sensor production, and Oita Technology Center (located in Oita City, Oita Prefecture), which commenced operations as a wholly-owned facility of Sony Semiconductor Manufacturing Corporation on April 1, had been temporarily halted, the affected equipment has been sequentially restarted from April 17, and production has resumed. Sony Semiconductor Manufacturing Corporation's Kagoshima Technology Center (located in Kirishima City, Kagoshima Prefecture) has continued its production operations after the earthquakes, and there have been no major effects on its operations.
Sony has confirmed the safety of all of its and its group companies' employees in the region affected by the earthquakes.
The impact of these events on Sony's consolidated results is currently being evaluated.