Dan - it is two things: a simplified workflow (sort of) and a deeper (potential) pool of colour data to work with.

A decent modern digital camera is capable of capturing any colour a human can see, which for argument's sake is the CIELAB 1976 colour space. All of our devices (excluding the colour printing process) are RGB devices, from cameras, to screens, to projectors. The photo printer industry even fools us into believing that their devices are RGB ones, so to a large extent most photographers think in RGB. The computer types have come up with a number of ways of representing RGB, and from a photography standpoint there are currently four commonly used RGB colour models: sRGB, DCI-P3 (the approach Apple uses on its most recent lines of displays), Adobe RGB and ProPhoto RGB. There are many others, but they are not something we tend to use in photography.

One advantage touted by Lightroom (and other parametric editor) fans is that those tools have access to every single colour captured by the camera. Photoshop and other pixel-based editors need to work on image data, so they must have a colour space assigned to them in order for the software to work; by converting the raw data into an image file, some of the data could be discarded. The amount is dependent on both the colour content of the capture and the range of the colour space that has been used. The only workaround that I know of is to use the L*a*b* colour space, which ACR exports to natively. Lightroom (and all of the other mainstream raw processors I've looked at) can only export to one of the four RGB colour models that I listed. If one wants to work in the L*a*b* colour space, using the conversion in Photoshop means we are starting with some degree of data loss by going from RGB to L*a*b*. If we work with ACR, then we do a native conversion of the raw data and no colour loss.

As an aside, plugins and a lot of the filters that ship with Photoshop cannot be used with the L*a*b* (or CMYK) colour spaces.

This is both hijacking the thread and going beyond what I know, but... Adobe is fairly closed-mouthed about the internal workings of its raw processor, but here is what I have gleaned from various sources, Adobe when possible. Let me know if you think any of this is incorrect.

1. Adobe has a single raw processing engine, onto which they put two front ends, LR and ACR.
2. The internal working space of this raw processing engine is called Linear ProPhoto RGB, which is basically ProPhoto but with a linear tonal response curve.
3. For purposes of viewing and creating a histogram, this is converted to a proprietary version of ProPhoto called "Melissa".

If this is all correct, the fork in the road comes at the point of moving the image into Photoshop. ACR allows you to convert from Linear ProPhoto to Lab in the process of creating the Photoshop base layer. Lightroom does not: it will create the base layer in whatever the Photoshop working space is, presumably ProPhoto, and requires that you convert this to Lab as a subsequent step. If all this is correct, can it really make an appreciable difference?

BTW, totally off topic, it seems to me that people have two main reasons for converting from ProPhoto to Lab in Photoshop. One is to separate tonality adjustments from color adjustments. The second is the flip side: to be able to manipulate color separately from tonality, using the a and b axes rather than RGB. I don't do the second, for the most part: I find it awkward to work on the a and b axes, and if one simply wants to remove tonality from color adjustments, one can use a color blend in RGB space. I do the former a lot because I often don't want tonality adjustments to affect saturation.
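The RGB-to-L*a*b* loss point above can be made concrete with a short sketch. This is a minimal illustration, not anyone's actual pipeline: the matrices and formulas are the standard published sRGB (D65) and CIELAB ones, but the function names and the demo colour are my own choices. A strongly saturated Lab colour that sits outside the sRGB gamut gets clipped on the way to 8-bit RGB, so it does not survive the round trip:

```python
# Sketch: why an L*a*b* -> 8-bit RGB -> L*a*b* round trip can lose data.
# Standard sRGB (D65) matrices and CIE L*a*b* formulas; no external libraries.

def srgb_to_linear(c):
    c /= 255.0  # 8-bit channel -> 0..1, then undo the sRGB gamma
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return max(0, min(255, round(c * 255)))  # clip out-of-gamut, quantise to 8 bits

def rgb_to_lab(r, g, b):
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    # sRGB -> XYZ (D65), normalised by the D65 white point
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.00000
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883
    f = lambda t: t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_rgb(L, a, b):
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    g = lambda t: t ** 3 if t ** 3 > 216 / 24389 else (116 * t - 16) * 27 / 24389
    x, y, z = g(fx) * 0.95047, g(fy) * 1.00000, g(fz) * 1.08883
    # XYZ -> linear sRGB; values outside 0..1 are out of the sRGB gamut
    rl = 3.2406 * x - 1.5372 * y - 0.4986 * z
    gl = -0.9689 * x + 1.8758 * y + 0.0415 * z
    bl = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return tuple(linear_to_srgb(v) for v in (rl, gl, bl))

if __name__ == "__main__":
    # A saturated colour outside the sRGB gamut: it clips on the way to
    # 8-bit RGB, and the round trip comes back noticeably different.
    lab = (50.0, 100.0, -100.0)
    rgb = lab_to_rgb(*lab)
    back = tuple(round(v, 1) for v in rgb_to_lab(*rgb))
    print("round trip:", lab, "->", rgb, "->", back)
```

The same mechanics apply in the other direction: how much is lost depends on how much of the capture's colour falls outside the destination space, which is the "amount is dependent on the colour content and the range of the colour space" point above.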
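On the "linear tonal response curve" point: ordinary ProPhoto applies the published ROMM RGB encoding, roughly a 1.8 gamma with a small straight-line segment near black, while the linear variant stores light proportionally and skips that encoding. A minimal sketch of the transfer curve (the helper names are mine, not Adobe's):

```python
# ProPhoto (ROMM RGB) transfer curve vs. linear light.
# Below the threshold ET the spec uses a straight 16x line instead of the
# 1.8 gamma, so the curve stays well behaved near black.

ET = 1 / 512  # ROMM threshold between the linear toe and the gamma segment

def prophoto_encode(e: float) -> float:
    """Linear light (0..1) -> gamma-encoded ProPhoto value (0..1)."""
    return 16 * e if e < ET else e ** (1 / 1.8)

def prophoto_decode(v: float) -> float:
    """Gamma-encoded ProPhoto value -> linear light."""
    return v / 16 if v < 16 * ET else v ** 1.8

if __name__ == "__main__":
    # Middle grey (about 0.18 in linear light) lands well above mid-scale
    # once encoded, which is why a truly linear histogram looks so dark.
    print("linear 0.18 encodes to", round(prophoto_encode(0.18), 3))
```

Nothing here is specific to Adobe's engine; it just shows what "ProPhoto with a linear tonal response curve" means in practice.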