Discussion:
[Lensfun-users] Canon PowerShot SD1100/IXUS 80
Jonathan Niehof
2016-07-04 16:31:19 UTC
In short, the attached snippet contains full (distortion + TCA +
vignetting) lens information for the Canon PowerShot SD1100 IS/IXUS
80.

Caveats: the distortion information is from the existing IXUS 80
information in the database. I added TCA and vignetting data to the
IXUS 80 and copied it to the SD1100. This is the older database format
since I'm still running lensfun 0.2.8 (Ubuntu 15.10) and wanted to
make sure it worked with my current darktable. This camera doesn't
have an adjustable aperture, only an ND8 filter; when the filter is in,
the camera reports the "effective" aperture in the EXIF data, i.e. an
aperture that would give the same brightness. I'm assuming the ND
filter doesn't affect the vignetting or TCA.
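
For reference, the arithmetic behind that "effective" aperture is just
ordinary stop math; a minimal sketch (the f/2.8 figure is only an
illustration, not a measured value for this lens):

    from math import sqrt, log2

    nd_factor = 8        # ND8 passes 1/8 of the light, i.e. 3 stops
    physical_f = 2.8     # illustrative wide-open f-number, not measured
    effective_f = physical_f * sqrt(nd_factor)
    print(f"ND8 = {log2(nd_factor):.0f} stops; "
          f"f/{physical_f} reports as ~f/{effective_f:.1f}")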

The very long version:

I shot with CHDK to DNG 1.3 raws. This is supposed to embed the
badpixels.bin data (bad pixels reported from the camera firmware) into
DNG opcodes so it can be removed by the raw converter; however, I
found a lot of obvious dead pixels (e.g. cyan blotches) in the
dcraw-converted raws. I pulled the CHDK-generated badpixels.bin and
parsed it into the format that dcraw uses for its -P option. I
converted all vignetting images with no interpolation
("dcraw -v -t 0 -4 -o 0 -M -D foo.DNG") and checked the PGM for
zero-value (totally dead) pixels.
All the vignetting images gave identical results, which I merged with
the badpixels.bin list, and then edited calibrate.py to add the
appropriate -P option to all dcraw calls. Python code for all this is
available.
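
For illustration, the zero-pixel scan and the dcraw -P output format
boil down to something like this (a simplified sketch, not the actual
script; the file names are invented):

    # Find totally dead (zero-value) pixels in a "dcraw -4 -D" PGM and
    # write them in the "column row timestamp" format that dcraw -P reads.
    import re
    import numpy as np

    def read_pgm16(path):
        """Minimal reader for the binary (P5) PGM that dcraw -4 -D writes."""
        with open(path, 'rb') as f:
            data = f.read()
        header = re.match(rb'P5\s+(\d+)\s+(\d+)\s+(\d+)\s', data)
        width, height, maxval = (int(g) for g in header.groups())
        dtype = np.dtype('>u2') if maxval > 255 else np.uint8
        pixels = np.frombuffer(data[header.end():], dtype=dtype,
                               count=width * height)
        return pixels.reshape(height, width)

    img = read_pgm16('vignetting_0001.pgm')          # invented file name
    rows, cols = np.nonzero(img == 0)
    with open('badpixels_dcraw.txt', 'w') as out:    # invented file name
        for r, c in zip(rows, cols):
            out.write(f'{c} {r} 0\n')                # column, row, time of death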

I made a lenses.txt file with the distortion from the existing IXUS 80
in lensfun.xml, but with a changed first line: "Canon PowerShot SD1100
IS: Canon, canonSD100IS, 5.9". I edited line 169 of calibrate.py to
default to "Canon PowerShot SD1100 IS" (matching the EXIF) instead of
"Standard".

For TCA, I made several attempts with buildings before giving up. I
bought two black-and-white checkered tablecloths on Amazon, 137x274cm,
and hung them up. I set up a tripod about 6-7m away. Because at the
shorter focal lengths the pattern didn't fill the frame, I did five
shots: one with the pattern in each corner (filling as much of the
frame as possible) and one with the camera brought closer (about 4m
for the shortest focal length) so the pattern filled the frame. At
longer focal lengths (where the pattern filled the frame), I took a
couple of exposures with slight camera movements between them to get
some variation. I shot at ISO 80 (base), exposure on automatic,
autofocus, with a 2-second timer to avoid vibrations from pressing the
shutter button. I shot at all six focal lengths. All exposures were
taken without the ND filter (although I didn't force that).

I ran calibrate.py against all the images with one-, two-, and
three-parameter fits. For each focal length, I parsed the .tca file
and plotted the TCA shift vs. radius (red and blue channels) for all
images, looking for reasonable agreement between the images. I took
the mean of the parameters for all images at each focal length to plot
each parameter as a function of focal length, looking for smooth
variation with focal length. (Code available.) The three-parameter
(bcv) fit had poor agreement between images and noisy variation with
focal length. At the shorter focal lengths, the two-parameter fit
(b and v) was clearly very good. The images taken closer in were clear
outliers and I threw them out. At longer focal lengths, neither the
one- nor the two-parameter fit had perfect agreement, but the
two-parameter fit was better, and in all cases the deviation from no
correction was larger than the difference between the images. So I
used the means of the two-parameter fits.
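
For illustration, the averaging/plotting step amounts to something
like this (a minimal sketch; the parameter layout and numbers are
invented, not the actual .tca contents):

    from collections import defaultdict
    import numpy as np
    import matplotlib.pyplot as plt

    # per-focal-length lists of per-image fit parameters, e.g.
    # (b_red, v_red, b_blue, v_blue) for a two-parameter fit
    fits = defaultdict(list)
    fits[6.2] += [(3e-5, 1.0002, -2e-5, 0.9998),
                  (2e-5, 1.0003, -3e-5, 0.9997)]
    fits[9.7] += [(1e-5, 1.0001, -1e-5, 0.9999)]

    focals = sorted(fits)
    means = np.array([np.mean(fits[f], axis=0) for f in focals])

    for i, name in enumerate(['b (red)', 'v (red)', 'b (blue)', 'v (blue)']):
        plt.plot(focals, means[:, i], 'o-', label=name)
    plt.xlabel('focal length (mm)')
    plt.ylabel('mean fit parameter')
    plt.legend()
    plt.show()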

I shot vignetting images of a white ceiling about 2m from the camera,
which was mounted on a tripod facing up. A lamp was positioned right
next to the tripod, emitting from just above lens level to avoid a
camera shadow. I sandwiched a sheet of Rosco #216 white diffusion gel
between the lens below and glass above (two panes from a sort of
"floating picture frame" that I had handy). Focus distance was set at
maximum; I did not actually bring anything at infinity into focus.
Again ISO 80, auto exposure, 2-second timer, no ND filter. I shot all
focal lengths, then rotated the camera and gel so that the
relationship between ceiling, camera, and gel was different (the lamp
and ceiling remained in the same orientation) before shooting another
set.

Using the gnuplot output from calibrate.py, I verified that the
vignetting images were generally smooth (and threw out two other sets
of vignetting photos, where I had tried a different approach to
holding the gel, which had obvious defects). The two images for each
focal length gave very similar results.
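
As an additional sanity check, one can also evaluate the fitted
lensfun "pa" vignetting model, C(r) = 1 + k1*r^2 + k2*r^4 + k3*r^6,
for both orientations and compare; a minimal sketch with invented k
values:

    import numpy as np

    def vignetting_pa(r, k1, k2, k3):
        # lensfun "pa" model: relative illumination vs. normalized radius
        return 1 + k1 * r**2 + k2 * r**4 + k3 * r**6

    r = np.linspace(0, 1, 200)
    fit_a = (-0.9, 0.5, -0.2)      # invented parameters, orientation 1
    fit_b = (-0.88, 0.48, -0.19)   # invented parameters, orientation 2
    diff = np.max(np.abs(vignetting_pa(r, *fit_a) - vignetting_pa(r, *fit_b)))
    print(f"maximum difference between the two fits: {diff:.3f}")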

For the XML, I copied the IXUS 80 entries verbatim (both camera and
lens), then changed the model tag to match Exif.Image.Model (which is
the same as Exif.Image.UniqueCameraModel), gave the lang="en" model a
short version of the name, and named the mount similarly. I populated
both the IXUS 80 and the copied SD1100 lens calibration with the new
TCA and vignetting data. The vignetting already had two lines per
focal length (near and far distances); I added two more lines for the
smaller aperture, based on the camera-reported effective aperture with
the ND filter in. Note that, as with the IXUS 80, the crop factor is
6.1 for the camera and 5.9 for the lens (based perhaps on full area
vs. JPEG area?). I haven't managed to get darktable to automatically
recognize even the camera from the EXIF, but it works if I manually
select the camera and then the lens.
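
For illustration, the shape of those vignetting entries (one per focal
length / aperture / distance combination) is roughly as below; the
focal length, apertures, and k values are invented placeholders, with
the larger aperture standing in for the ND8 "effective" value:

    import xml.etree.ElementTree as ET

    calibration = ET.Element('calibration')
    for aperture in ('2.8', '7.9'):      # wide open, and an ND8 "effective" value
        for distance in ('1', '1000'):   # near and far
            ET.SubElement(calibration, 'vignetting', {
                'model': 'pa', 'focal': '6.2',
                'aperture': aperture, 'distance': distance,
                'k1': '-0.9', 'k2': '0.5', 'k3': '-0.2',  # invented values
            })
    ET.dump(calibration)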

Next on the agenda is vignetting on the Canon EF-S18-135mm f/3.5-5.6
IS (Rebel T2i), then full cal for the Fujifilm XF-1. Then darktable-y
stuff, if anyone's interested: base curves and color matrices and such
(probably try for the zero/dead pixel correction in DT, extending the
hot pixel module.)
Roman Lebedev
2016-07-04 16:46:56 UTC
Hi.
Post by Jonathan Niehof
[...]
I haven't managed to get darktable to automatically recognize even
the camera from the EXIF, but it works if I manually select the camera
and then the lens.
Make sure that the camera name in lensfun, in the EXIF, and in dt's
cameras.xml all match precisely.
I do not know what the EXIF says, but cameras.xml says it should be:
Fujifilm XF1
Post by Jonathan Niehof
Next on the agenda is vignetting on the Canon EF-S18-135mm f/3.5-5.6
IS (Rebel T2i), then full cal for the Fujifilm XF-1.
Then darktable-y
stuff, if anyone's interested: base curves
Custom base/tone curves are overrated, and we do not have any means
to verify their validity, so we do not really merge those any more.
Post by Jonathan Niehof
and color matrices and such
(probably try for the zero/dead pixel correction in DT, extending the
hot pixel module.)
No, the hot pixel module does not need any per-camera calibration.

That camera should be basically supported, except:
1. no raw sample on rawsamples.ch
2. no wb presets
3. no noise profile

Custom color matrices are overrated too; we do not really merge those.

(https://www.darktable.org/resources/camera-support/)

Roman.
Torsten Bronger
2016-07-05 10:29:47 UTC
Hi there!
Post by Jonathan Niehof
In short, the attached snippet contains full (distortion + TCA +
vignetting) lens information for the Canon Powershot SD1100 IS/IXUS
80.
Thank you very much for the very thorough work!
Post by Jonathan Niehof
Caveats: the distortion information is from the existing IXUS 80
information in the database. I added TCA and vignetting data to the
IXUS 80 and copied it to the SD1100.
But I think the old distortion data referred to JPEG images, which
may already have had anti-distortion applied. Does the data also work
for DNG images?
Post by Jonathan Niehof
This is the older database format since I'm still running lensfun
0.2.8 (Ubuntu 15.10) and wanted to make sure it worked with my
current darktable.
AFAICS, there is no difference between the two Lensfun versions in
this case.
Post by Jonathan Niehof
This camera doesn't have an adjustable aperture, but an ND8
filter; when the filter is in, the camera reports the "effective"
aperture in the EXIF data, i.e. an aperture that would give the
same brightness.
Do both cameras have that?
Post by Jonathan Niehof
I'm assuming the ND filter doesn't affect the vignetting or TCA.
If there is any effect, it should be negligible. It may have a slight
prism effect toward the corners, but let's ignore that.
Post by Jonathan Niehof
[...]
I made a lenses.txt file with the distortion from the existing IXUS80
in lensfun.xml, but with a changed first line: "Canon PowerShot SD1100
IS: Canon, canonSD100IS, 5.9". I edited line 169 of calibrate.py to
default to "Canon PowerShot SD1100 IS" (matching the exif) instead of
"Standard".
Matching the EXIF of the camera model or the lens model? Typically,
the lens model field is empty for compact cameras. (I don't know
for CHDK, though.)
Post by Jonathan Niehof
For TCA, [...]
I shot vignetting [...]
Just to be sure: Both TCA and vignetting refer to DNGs, correct?
Post by Jonathan Niehof
For the XML, I copied the IXUS 80 verbatim (both camera and lens),
then changed model tag to match Exif.Image.Model (which is the
same as Exif.Image.UniqueCameraModel), made the lang en a short
version, and named the mount similarly. I populated both the IXUS
80 and the copied SD1100 lens calibration with the new TCA and
vignetting data.
If the data is the same for both, I would give the SD1100 the IXUS 80
mount. This way, the former simply re-uses all calibrations of the
latter.
Post by Jonathan Niehof
The vignetting already had two lines per focal length (near and
far distances); I added two more lines for smaller aperture, based
on the camera-reported effective aperture with the ND filter
in. Note that, as with the IXUS 80, the crop factor is 6.1 for the
camera and 5.9 for the lens (based perhaps on full area vs. JPEG
area?)
Actually, both must be the same. Otherwise, your calibration data
would not be applied accurately.

Cheers,
Torsten.
--
Torsten Bronger Jabber ID: ***@jabber.rwth-aachen.de
Jonathan Niehof
2016-07-05 23:37:55 UTC
On Tue, Jul 5, 2016 at 6:29 AM, Torsten Bronger
Post by Torsten Bronger
But I think the old distortion data referred to JPEG images, which
may already have had anti-distortion applied. Does the data also work
for DNG images?
Well, it looks "reasonable" although perhaps over-corrected. Only one
way to find out; I'll run a set of distortion images and see.

If they're different, that does raise the question of how you would
like JPEG vs. raw represented in the lensfun database.
Post by Torsten Bronger
Do both cameras have that?
The SD1100 and IXUS 80 are identical; it's just that the US market
has to have its own special name.
Post by Torsten Bronger
Matching the EXIF of the camera model or the lens model? Typically,
the lens model field is empty for compact cameras. (I don't know
for CHDK, though.)
For the camera. The lens model field is definitely empty, even out of
CHDK. I *think* I have it working now, one of those "look away and
next time I try, it works" problems.
Post by Torsten Bronger
Just to be sure: Both TCA and vignetting refer to DNGs, correct?
Yes, I shot everything in DNG.
Post by Torsten Bronger
If the data is the same for both, I would give the SD1100 the IXUS 80
mount. This way, the former simply re-uses all calibrations of the
latter.
Makes sense.
Post by Torsten Bronger
Post by Jonathan Niehof
Note that, as with the IXUS 80, the crop factor is 6.1 for the
camera and 5.9 for the lens (based perhaps on full area vs. JPEG
area?)
Actually, both must be the same. Otherwise, your calibration data
would not be applied accurately.
Hmm. I wonder why the existing data says 5.9. A 1/2.5" sensor is
nominally a 6.02 crop factor. Comparing FocalLength to
FocalLengthIn35mmFilm in the EXIF gives anything from 6.02 to 6.13.
DefaultCropSize and the JPG images are 3264x2448; the DNG is
3298x2470, matching the dimensions from ActiveArea; and ImageWidth x
ImageLength is 3336x2480. 3336 to 3264 is roughly the ratio of 6.02 to
5.9, so the question is which of these sizes is the nominal 1/2.5".
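
For illustration, the arithmetic in question, using the pixel
dimensions quoted above (the 6.2mm / 38mm focal-length pair is only an
example, not exact tag values):

    jpeg_width = 3264     # DefaultCropSize / JPEG width
    dng_width = 3336      # ImageWidth of the DNG
    print(dng_width / jpeg_width)    # ~1.022
    print(6.02 / 5.9)                # ~1.020 -- roughly the same ratio

    # crop factor from the EXIF 35mm-equivalent focal length
    focal_mm, focal_35mm = 6.2, 38.0
    print(focal_35mm / focal_mm)     # ~6.1
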
Jonathan Niehof
2016-07-25 23:24:45 UTC
Apologies for how long it took to get back to this; it was a fair bit
of heavy lifting and life got in the way...

Details are in https://github.com/jtniehof/photo (lenses/sd1100),
although I wouldn't recommend cloning it, just browse to what looks
interesting. If somebody could take a look and make sure it seems
reasonable, then I'll make a pull request against lensfun.

I set the database up with the SD1100 using the IXUS 80 mount and
replaced all the values for the IXUS 80 lens. I used 6.02 for the crop
factor; 6.02/5.9 seems to be the ratio of ImageWidth to
DefaultCropSize.

The OOC JPEGs appear to be completely uncorrected, so there is that
at least. However, the distortion parameters currently in the database
are dubious, particularly for 8.3mm, with Rd/Ru = 1.5 at the center of
the image!

There's a nice tutorial for manual lens calibration, similar to
Torsten's screencast but with the new interface, at
http://hugin.sourceforge.net/tutorials/calibration/en.shtml . However,
I think I got the best results with the automatic tool documented at
http://wiki.panotools.org/Calibrate_lens_gui
Torsten Bronger
2016-07-29 16:24:16 UTC
Hi there!
Post by Jonathan Niehof
[...]
The OOC JPG appear to be completely uncorrected, so there is that
at least. However the distortion parameters currently in the
database are dubious, particularly for 8.3mm, with Rd/Ru = 1.5 at
the center of the image!
Definitely dubious. BTW, this is very old data from the beginnings
of Lensfun.
Post by Jonathan Niehof
There's a nice tutorial for manual lens calibration, similar to
Torsten's screencast but with the new interface, at
http://hugin.sourceforge.net/tutorials/calibration/en.shtml .
Actually, this is the origin of the method that I recommend.
Post by Jonathan Niehof
However, I think I got the best results with the automatic tool
documented at http://wiki.panotools.org/Calibrate_lens_gui
Well, I accept data created with this tool in Lensfun's database but
I don't recommend it. In most cases, getting lines running across
the whole image *in one segment* is tricky. Mostly, the tool can
detect only multiple fragments of lines. One can play with the
parameters, or one can pre-edit the image (enhance contrast) but
then the convenience advantage is lost. YMMV.

Cheers,
Torsten.
--
Torsten Bronger Jabber ID: ***@jabber.rwth-aachen.de


Jonathan Niehof
2016-07-29 20:33:50 UTC
On Fri, Jul 29, 2016 at 12:24 PM, Torsten Bronger
Post by Torsten Bronger
Well, I accept data created with this tool in Lensfun's database but
I don't recommend it. In most cases, getting lines running across
the whole image *in one segment* is tricky. Mostly, the tool can
detect only multiple fragments of lines. One can play with the
parameters, or one can pre-edit the image (enhance contrast) but
then the convenience advantage is lost. YMMV.
Absolutely YMMV and something to use with care. I found that even with
tuning parameters it was much faster than the manual method, and I
needed the contrast enhancement for my own eyes. The manual method
also seemed fairly sensitive to exactly how careful I was in line
placement (3 lines x 50 control point pairs each takes a long time!)
and tended to diverge badly at small R. My thought (and you've been
doing this a lot longer than I have!) is that it's worth getting some
good data through the center of the image just to help anchor the fit
there... otherwise the degrees of freedom in the fit will go into
fitting little wiggles at the periphery rather than keeping things
reasonable at the center. Limiting to poly3 may have the same effect
for lenses where that's appropriate.
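
For illustration of that center behaviour: in the lensfun "poly3"
model, Rd = Ru*(1 - k1 + k1*Ru^2), the ratio Rd/Ru tends to 1 - k1 at
the center, while in the "ptlens" model, Rd = Ru*(a*Ru^3 + b*Ru^2 +
c*Ru + 1 - a - b - c), it tends to 1 - a - b - c -- a combination that
is only weakly constrained if there are no control points near the
center. A minimal sketch with invented parameter values:

    import numpy as np

    def poly3(ru, k1):
        # lensfun "poly3" distortion model
        return ru * (1 - k1 + k1 * ru**2)

    def ptlens(ru, a, b, c):
        # lensfun "ptlens" distortion model
        return ru * (a * ru**3 + b * ru**2 + c * ru + 1 - a - b - c)

    ru = np.array([0.001, 0.25, 0.5, 0.75, 1.0])
    print(poly3(ru, 0.02) / ru)             # tends to 1 - k1 = 0.98 at the center
    print(ptlens(ru, 0.4, -0.5, 0.3) / ru)  # tends to 1 - a - b - c = 0.8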

Incidentally, am I correct in assuming that despite the order of
operations in lensfun itself
(http://lensfun.sourceforge.net/manual/corrections.html), each element
of the calibration process is independent? I.e. I don't need to have
the correct distortion parameters in place to do vignetting or TCA?
Didn't look like it in the script....

Torsten Bronger
2016-07-31 07:03:28 UTC
Hi there!
Post by Jonathan Niehof
[...]
Absolutely YMMV and something to use with care. I found that even
with tuning parameters it was much faster than the manual method,
and I needed the contrast enhancement for my own eyes.
And you really had only one fragment? (Sometimes this is only
visible if you switch lines on and off.)
Post by Jonathan Niehof
The manual method also seemed fairly sensitive to exactly how
careful I was in line placement (3 lines x 50 control point pairs
each takes a long time!) and tended to diverge badly at small
R. My thought (and you've been doing this a lot longer than I!) is
that it's worth getting some good data through the centre of the
image just to help anchor the fit there...otherwise degrees of
freedom in the fit will go into fitting little wiggles of the
periphery rather than keeping things reasonable at the
center. Limiting to poly3 may have the same effect for lenses
where that's appropriate.
You are right on all points here. Therefore, I want that second
line 1/3 from the centre. It can never be perfect, of course,
because there's a tradeoff: too close to the centre, and the
deviation from a straight line is too small to give useful
information; too far away from it, and the centre cannot be
extrapolated accurately.

However, I don't see how the GUI tool helps with that. Besides,
more than 50% of the images really cannot be detected automatically
because the lines are interrupted and/or heavily blurred and/or
dark.

The only better method is a picture of an equidistant grid or ruler,
taking the equidistance into account in the fit. But this is only
feasible for tele lenses, given the minimal distance required.
Post by Jonathan Niehof
Incidentally, am I correct in assuming that despite the order of
operations in lensfun itself
(http://lensfun.sourceforge.net/manual/corrections.html), each
element of the calibration process is independent? I.e. I don't
need to have the correct distortion parameters in place to do
vignetting or TCA? Didn't look like it in the script....
Ordering is significant. Starting with the camera image, vignetting
is corrected first, then TCA, then distortion. Therefore, the script
can measure vignetting accurately in the raw image. TCA is also
measured in the raw image, which is slightly inaccurate -- it should
be vignetting-corrected first. But the dependence should be absolutely
negligible, since the two aberrations live in very different frequency
domains.

And as for distortion, one should use only the green channel for
control point placement. But life has to offer so much more than
control point placement. ;-)

Cheers,
Torsten.
--
Torsten Bronger Jabber ID: ***@jabber.rwth-aachen.de

