  • Correcting both under- and overexposure with a single network: Learning Multi-Scale Photo Exposure Correction
    2022-01-18 06:02:25

    Learning Multi-Scale Photo Exposure Correction

     

    [pdf] [Github]

    目录

    Abstract

    1. Introduction

    2. Related Work

    3. Our Dataset

    4. Our Method

    4.1. Coarse-to-Fine Exposure Correction

    4.2. Coarse-to-Fine Network


    Abstract

    Capturing photographs with wrong exposures remains a major source of errors in camera-based imaging. Exposure problems are categorized as either: (i) overexposed, where the camera exposure was too long, resulting in bright and washed-out image regions, or (ii) underexposed, where the exposure was too short, resulting in dark regions. Both under- and overexposure greatly reduce the contrast and visual appeal of an image. Prior work mainly focuses on underexposed images or general image enhancement.

    In contrast, our proposed method targets both over- and underexposure errors in photographs. We formulate the exposure correction problem as two main sub-problems: (i) color enhancement and (ii) detail enhancement. Accordingly, we propose a coarse-to-fine deep neural network (DNN) model, trainable in an end-to-end manner, that addresses each subproblem separately. A key aspect of our solution is a new dataset of over 24,000 images exhibiting the broadest range of exposure values to date with a corresponding properly exposed image.

    Our method achieves results on par with existing state-of-the-art methods on underexposed images and yields significant improvements for images suffering from overexposure errors.

    Background:

    In camera-based imaging, capturing photographs with the wrong exposure remains a major source of errors. Exposure problems fall into two categories: (i) overexposure, where the exposure time was too long, producing bright, washed-out image regions; and (ii) underexposure, where the exposure time was too short, producing dark regions. Both greatly reduce an image's contrast and visual appeal. Prior work has mainly focused on underexposed images or on general image enhancement.

    Method:

    In contrast (the phrase "In contrast" marks the transition and highlights the paper's novelty), the proposed method targets both over- and underexposure errors in photographs. The paper formulates exposure correction as two main sub-problems: (i) color enhancement and (ii) detail enhancement. Accordingly, it proposes a coarse-to-fine deep neural network (DNN) model, trainable end-to-end, that addresses each sub-problem separately. A key part of the solution is a new dataset of over 24,000 images covering the broadest range of exposure values to date, each with a corresponding properly exposed reference image.

    Results:

    The method achieves results on par with existing state-of-the-art methods on underexposed images and yields significant improvements on images suffering from overexposure errors.

    1. Introduction

    The exposure used at capture time directly affects the overall brightness of the final rendered photograph. Digital cameras control exposure using three main factors: (i) capture shutter speed, (ii) f-number, which is the ratio of the focal length to the camera aperture diameter, and (iii) the ISO value to control the amplification factor of the received pixel signals. In photography, exposure settings are represented by exposure values (EVs), where each EV refers to different combinations of camera shutter speeds and f-numbers that result in the same exposure effect—also referred to as ‘equivalent exposures’ in photography.

    Digital cameras can adjust the exposure value of captured images for the purpose of varying the brightness levels. This adjustment can be controlled manually by users or performed automatically in an auto-exposure (AE) mode. When AE is used, cameras adjust the EV to compensate for low/high levels of brightness in the captured scene using through-the-lens (TTL) metering that measures the amount of light received from the scene [49].

    Exposure basics:

    The exposure used at capture time directly affects the overall brightness of the final rendered photograph. Digital cameras control exposure through three main factors: (i) the shutter speed; (ii) the f-number, i.e., the ratio of the focal length to the aperture diameter; and (iii) the ISO value, which controls the amplification of the received pixel signals. In photography, exposure settings are represented by exposure values (EVs), where each EV corresponds to different combinations of shutter speed and f-number that produce the same exposure effect, also called "equivalent exposures".

    Digital cameras can adjust the exposure value of a captured image to vary its brightness level. This adjustment can be controlled manually by the user or performed automatically in auto-exposure (AE) mode. With AE, the camera adjusts the EV to compensate for low/high scene brightness using through-the-lens (TTL) metering, which measures the amount of light received from the scene. (This professional background is essential to the paper; an argument that stayed purely at the image level would be much less convincing.)

    [49] Bryan Peterson. Understanding exposure: How to shoot great photographs with any camera. AmPhoto Books, 2016.
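    As a quick numerical illustration of "equivalent exposures" (not from the paper; it uses the standard definition EV = log2(N²/t), with N the f-number and t the shutter time in seconds):

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Two different shutter/aperture combinations, same EV -> 'equivalent exposures':
ev_a = exposure_value(2.0, 1 / 100)  # f/2, 1/100 s
ev_b = exposure_value(4.0, 1 / 25)   # f/4, 1/25 s
print(round(ev_a, 3), round(ev_b, 3))  # both ≈ 8.644
```

    Closing the aperture by two stops (f/2 → f/4) while making the shutter four times longer leaves the EV, and hence the exposure effect, unchanged.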

    Exposure errors can occur due to several factors, such as errors in measurements of TTL metering, hard lighting conditions (e.g., very low lighting and backlighting), dramatic changes in the brightness level of the scene, and errors made by users in the manual mode. Such exposure errors are introduced early in the capture process and are thus hard to correct after rendering the final 8-bit image. This is due to the highly nonlinear operations applied by the camera image signal processor (ISP) afterwards to render the final 8-bit standard RGB (sRGB) image [31].

    Fig. 1 shows typical examples of images with exposure errors. In Fig. 1, exposure errors result in either very bright image regions, due to overexposure, or very dark regions, caused by underexposure errors, in the final rendered images. Correcting images with such errors is a challenging task even for well-established image enhancement software packages, see Fig. 9. Although both over- and underexposure errors are common in photography, most prior work is mainly focused on correcting underexposure errors [23, 56, 58, 65, 66] or generic image quality enhancement [11, 18].

    Causes of exposure errors and their effect on imaging:

    Exposure errors can arise from several factors, such as measurement errors in TTL metering, hard lighting conditions (e.g., very low lighting and backlighting), dramatic changes in scene brightness, and user errors in manual mode. Such exposure errors are introduced early in the capture process and are therefore hard to correct after the final 8-bit image has been rendered, because the camera image signal processor (ISP) afterwards applies highly nonlinear operations to render the final 8-bit standard RGB (sRGB) image.

    Fig. 1 shows typical examples of images with exposure errors: in the final rendered images, exposure errors produce either very bright regions, due to overexposure, or very dark regions, due to underexposure. Correcting such images is a challenging task even for well-established image enhancement software packages, as shown in Fig. 9. Although both over- and underexposure errors are common in photography, most prior work focuses on correcting underexposure errors or on generic image quality enhancement.

    Figure 1: Photographs with over- and underexposure errors and the results of our method using a single model for exposure correction. These sample input images are taken from outside our dataset to demonstrate the generalization of our trained model. 

    Figure 9: Comparisons with commercial software packages. The input images are taken from Flickr.

    Contributions

    We propose a coarse-to-fine deep learning method for exposure error correction of both over- and underexposed sRGB images. Our approach formulates the exposure correction problem as two main sub-problems: (i) color and (ii) detail enhancement. We propose a coarse-to-fine deep neural network (DNN) model, trainable in an end-to-end manner, that begins by correcting the global color information and subsequently refines the image details.

    In addition to our DNN model, a key contribution to the exposure correction problem is a new dataset containing over 24,000 images rendered from raw-RGB to sRGB with different exposure settings, with broader exposure ranges than previous datasets. Each image in our dataset is provided with a corresponding properly exposed reference image.

    Lastly, we present an extensive set of evaluations and ablations of our proposed method with comparisons to the state of the art. We demonstrate that our method achieves results on par with previous methods dedicated to underexposed images and yields significant improvements on overexposed images. Furthermore, our model generalizes well to images outside our dataset.

    Contributions:
    (Method) The paper proposes a coarse-to-fine deep learning method for correcting exposure errors in both over- and underexposed sRGB images. It formulates exposure correction as two main sub-problems: (i) color enhancement and (ii) detail enhancement, and proposes a coarse-to-fine deep neural network (DNN) model, trained end-to-end, that first corrects the global color information and then refines the image details.

    (Data) A key contribution to the exposure correction problem is a new dataset of over 24,000 images, rendered from raw-RGB to sRGB with different exposure settings and a broader exposure range than previous datasets. Each image in the dataset comes with a corresponding properly exposed reference image.

    (Results) Finally, the paper presents an extensive set of evaluations and ablations, with comparisons to the state of the art. The experiments show that the method matches previous methods dedicated to underexposed images and yields significant improvements on overexposed images. Moreover, the model generalizes well to images outside the dataset (good generalization).

    2. Related Work

    Interested readers can refer to the blog post [ Related Work of Exposure Correction ].

    3. Our Dataset

    To train our model, we need a large number of training images rendered with realistic over- and underexposure errors and corresponding properly exposed ground truth images. As discussed in Sec. 2, such datasets are currently not publicly available to support exposure correction research. For this reason, our first task is to create a new dataset. Our dataset is rendered from the MIT-Adobe FiveK dataset [6], which has 5,000 raw-RGB images and corresponding sRGB images rendered manually by five expert photographers [6].

    For each raw-RGB image, we use the Adobe Camera Raw SDK [1] to emulate different EVs as would be applied by a camera [53]. Adobe Camera Raw accurately emulates the nonlinear camera rendering procedures using metadata embedded in each DNG raw file [2, 53]. We render each raw-RGB image with different digital EVs to mimic real exposure errors. Specifically, we use the relative EVs −1.5, −1, +0, +1, and +1.5 to render images with underexposure errors, a zero gain of the original EV, and overexposure errors, respectively. The zero-gain relative EV is equivalent to the original exposure settings applied onboard the camera during capture time.

    As the ground truth images, we use images that were manually retouched by an expert photographer (referred to as Expert C in [6]) as our target correctly exposed images, rather than using our rendered images with +0 relative EV. The reason behind this choice is that a significant number of images contain backlighting or partial exposure errors in the original exposure capture settings. The expert adjustments were performed in the ProPhoto RGB color space [6] (rather than raw-RGB), which we converted to a standard 8-bit sRGB color space encoding.

    To train the model, a large number of training images with realistic over- and underexposure errors is needed, along with corresponding properly exposed ground-truth (GT) images. Such datasets are currently not publicly available to support exposure correction research, so the paper's first task is to create a new one. The dataset is rendered from the MIT-Adobe FiveK dataset, which contains 5,000 raw-RGB images and corresponding sRGB images rendered manually by five expert photographers.

    For each raw-RGB image, the Adobe Camera Raw SDK [1] is used to emulate different EVs as a camera would apply them. Adobe Camera Raw accurately emulates the nonlinear camera rendering procedures using metadata embedded in each DNG raw file. Each raw-RGB image is rendered with different digital EVs to mimic real exposure errors: specifically, the relative EVs −1.5, −1, +0, +1, and +1.5 are used to render images with underexposure errors, zero gain over the original EV, and overexposure errors, respectively. The zero-gain relative EV is equivalent to the original exposure settings applied on board the camera at capture time.

    For the GT images, the paper uses images manually retouched by an expert photographer (referred to as Expert C in [6]) as the properly exposed targets, rather than the rendered images with +0 relative EV. The reason for this choice is that a significant number of images contain backlighting or partial exposure errors in their original capture settings. The expert adjustments were performed in the ProPhoto RGB color space (rather than raw-RGB) and were converted to a standard 8-bit sRGB encoding.
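    The relative-EV rendering described above boils down to a gain of 2^EV applied in the linear domain before the nonlinear rendering. The sketch below simulates this with a toy gamma curve standing in for the real camera rendering (the paper uses a full Adobe Camera Raw emulation; the gamma here is only an illustrative assumption):

```python
def render_with_relative_ev(linear_pixels, ev, gamma=2.2):
    """Apply a digital exposure gain of 2**ev in the linear domain, clip,
    then a toy gamma 'rendering' to 8-bit values.
    (A crude stand-in for the camera-ISP emulation used in the paper.)"""
    out = []
    for v in linear_pixels:
        exposed = min(max(v * (2.0 ** ev), 0.0), 1.0)  # clipping = lost highlight detail
        out.append(round((exposed ** (1.0 / gamma)) * 255))
    return out

patch = [0.05, 0.25, 0.60]  # linear scene values
for ev in (-1.5, -1, 0, +1, +1.5):
    print(ev, render_with_relative_ev(patch, ev))
```

    Note how positive EVs push bright linear values past 1.0, where clipping destroys detail; this is exactly why overexposure errors are hard to undo after rendering.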

    In total, our dataset contains 24,330 8-bit sRGB images with different digital exposure settings. We discarded a small number of images that had misalignment with their corresponding ground truth image. These misalignments are due to different usage of the DNG crop area metadata by Adobe Camera Raw SDK and the expert. Our dataset is divided into three sets: (i) training set of 17,675 images, (ii) validation set of 750 images, and (iii) testing set of 5,905 images. The training, validation, and testing sets do not share any scenes in common. Fig. 2 shows examples of our generated 8-bit sRGB images and the corresponding properly exposed 8-bit sRGB reference images.

    In total, the dataset contains 24,330 8-bit sRGB images with different digital exposure settings. A small number of images that were misaligned with their corresponding GT images were discarded; these misalignments stem from different uses of the DNG crop-area metadata by the Adobe Camera Raw SDK and the expert.

    The dataset is divided into three sets: (i) a training set of 17,675 images, (ii) a validation set of 750 images, and (iii) a testing set of 5,905 images. The training, validation, and testing sets share no scenes in common. Fig. 2 shows examples of the generated 8-bit sRGB images and the corresponding properly exposed 8-bit sRGB reference images.

    Figure 2: Dataset overview. Our dataset contains images with different exposure error types and their corresponding properly exposed reference images. Shown is a t-SNE visualization [42] of all images in our dataset and the low-light (LOL) paired dataset (outlined in red) [58]. Notice that LOL covers a relatively small fraction of the possible exposure levels, as compared to our introduced dataset. Our dataset was rendered from linear raw-RGB images taken from the MIT-Adobe FiveK dataset [6]. Each image was rendered with different relative exposure values (EVs) by an accurate emulation of the camera ISP processes.

    4. Our Method

    4.1. Coarse-to-Fine Exposure Correction

    Let X represent the Laplacian pyramid of I with n levels, such that X(l) is the l-th level of X. The last level of this pyramid (i.e., X(n)) captures low-frequency information of I, while the first level (i.e., X(1)) captures the high-frequency information. Such frequency levels can be categorized into: (i) global color information of I stored in the low-frequency level and (ii) image coarse-to-fine details stored in the mid- and high-frequency levels. These levels can be later used to reconstruct the full-color image I.

    Fig. 3 motivates our coarse-to-fine approach to exposure correction. Figs. 3-(A) and (B) show an example overexposed image and its corresponding well-exposed target, respectively. As observed, a significant exposure correction can be obtained by using only the low-frequency layer (i.e., the global color information) of the target image in the Laplacian pyramid reconstruction process, as shown in Fig. 3-(C). We can then improve the final image by enhancing the details in a sequential way by correcting each level of the Laplacian pyramid, as shown in Fig. 3-(D). Practically, we do not have access to the properly exposed image in Fig. 3-(B) at the inference stage, and thus our goal is to predict the missing color/detail information of each level in the Laplacian pyramid.

    Given an 8-bit sRGB input image, I, rendered with the incorrect exposure setting, our method aims to produce an output image, Y, with fewer exposure errors than those in I. As we simultaneously target both over- and underexposed errors, our input image, I, is expected to contain regions of nearly over- or under-saturated values with corrupted color and detail information. We propose to correct color and detail errors of I in a sequential manner. Specifically, we process a multi-resolution representation of I, rather than directly dealing with the original form of I. We use the Laplacian pyramid [4] as our multi-resolution decomposition, which is derived from the Gaussian pyramid [5] of I.
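    A minimal sketch of the Laplacian pyramid decomposition and reconstruction, in one dimension for clarity (a toy stand-in for the 2-D pyramids of [4, 5]; the averaging and repetition filters are simplifications of the real Gaussian filtering):

```python
def downsample(x):
    """Halve resolution by averaging adjacent pairs (toy Gaussian-pyramid step)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x):
    """Double resolution by repetition (toy expand step)."""
    return [v for v in x for _ in range(2)]

def laplacian_pyramid(x, n):
    """X(1)..X(n-1): high/mid-frequency residuals; X(n): low-frequency base."""
    levels = []
    for _ in range(n - 1):
        low = downsample(x)
        levels.append([a - b for a, b in zip(x, upsample(low))])  # detail residual
        x = low
    levels.append(x)  # coarsest level: global 'color' information
    return levels

def reconstruct(levels):
    """Rebuild the signal by upsampling and adding residuals back, coarse to fine."""
    x = levels[-1]
    for residual in reversed(levels[:-1]):
        x = [a + b for a, b in zip(upsample(x), residual)]
    return x

signal = [0.1, 0.2, 0.4, 0.8, 0.9, 0.7, 0.3, 0.2]  # length must be a power of 2 here
pyr = laplacian_pyramid(signal, n=3)
assert all(abs(a - b) < 1e-12 for a, b in zip(reconstruct(pyr), signal))
```

    The decomposition is lossless: since each residual stores exactly what the downsample/upsample round-trip discards, reconstruction recovers the input, which is why correcting the pyramid levels one by one can recover the full-resolution image.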

    Inspired by this observation and the success of coarse-to-fine architectures for various other computer vision tasks (e.g., [14, 33, 41, 54]), we design a DNN that corrects the global color and detail information of I in a sequential manner using the Laplacian pyramid decomposition. The remaining parts of this section explain the technical details of our model (Sec. 4.2), including details of the losses (Sec. 4.3), inference phase (Sec. 4.4), and training (Sec. 4.5).

    Problem statement and motivation for the method: the "color enhancement and detail enhancement" and the "coarse-to-fine" model mentioned in the abstract and introduction are realized through the Laplacian pyramid.

    Let X denote the Laplacian pyramid of image I with n levels, where X(l) is the l-th level of X. The last level of the pyramid (i.e., X(n)) captures the low-frequency information of I, while the first level (i.e., X(1)) captures the high-frequency information. These frequency levels can be categorized into: (i) the global color information of I, stored in the low-frequency level, and (ii) the image's coarse-to-fine details, stored in the mid- and high-frequency levels. These levels can later be used to reconstruct the full-color image I.

    Fig. 3 motivates the coarse-to-fine exposure correction approach.

    Figure 3: Motivation behind our coarse-to-fine exposure correction approach. Example of an overexposed image and its corresponding properly exposed image shown in (A) and (B), respectively. The Laplacian pyramid decomposition allows us to enhance the color and detail information sequentially, as shown in (C) and (D), respectively.

    Figs. 3-(A) and (B) show an overexposed image and its corresponding well-exposed target, respectively. As can be seen, a significant exposure correction is already obtained in the Laplacian pyramid reconstruction by using only the low-frequency layer (i.e., the global color information) of the target image, as shown in Fig. 3-(C).

    The final image can then be improved by correcting each level of the Laplacian pyramid, enhancing the details sequentially, as shown in Fig. 3-(D).

    In practice, the properly exposed image of Fig. 3-(B) is not available at inference time, so the goal is to predict the missing color/detail information of each level of the Laplacian pyramid.

    Given an 8-bit sRGB input image I rendered with an incorrect exposure setting, the method aims to produce an output image Y with fewer exposure errors than I. Since both over- and underexposure errors are targeted simultaneously, the input image I may contain regions of nearly over- or under-saturated values whose color and detail information is corrupted.

    The paper proposes to correct the color and detail errors of I sequentially. Specifically, it processes a multi-resolution representation of I rather than its original form, using the Laplacian pyramid as the multi-resolution decomposition (derived from the Gaussian pyramid of I).

    Inspired by this observation and by the success of coarse-to-fine architectures in various other computer vision tasks (e.g., [14, 33, 41, 54]), the paper designs a DNN that corrects the global color and detail information of I sequentially, using the Laplacian pyramid decomposition.

    [14] Deep generative image models using a Laplacian pyramid of adversarial networks. In NeurIPS, 2015. 

    [33] Deep Laplacian pyramid networks for fast and accurate super-resolution. In CVPR, 2017.

    [41] Efficient and fast real-world noisy image denoising by combining pyramid neural network and two-pathway unscented Kalman filter. IEEE Transactions on Image Processing, 29(1):3927–3940, 2020.

    [54] SinGAN: Learning a generative model from a single natural image. In ICCV, 2019.

    4.2. Coarse-to-Fine Network

    Our image exposure correction architecture sequentially processes the n-level Laplacian pyramid, X, of the input image, I, to produce the final corrected image, Y. The proposed model consists of n sub-networks. Each of these sub-networks is a U-Net-like architecture [52] with untied weights. We allocate the network capacity in the form of weights based on how significantly each sub-problem (i.e., global color correction and detail enhancement) contributes to our final result.

    Fig. 4 provides an overview of our network. As shown, the largest (in terms of weights) subnetwork in our architecture is dedicated to processing the global color information in I (i.e., X(n) ). This sub-network (shown in yellow in Fig. 4) processes the low-frequency level X(n) and produces an upscaled image Y(n) . The upscaling process scales up the output of our sub-network by a factor of two using strided transposed convolution with trainable weights.

    Next, we add the first mid-frequency level X(n−1) to Y(n) to be processed by the second subnetwork in our model. This sub-network enhances the corresponding details of the current level and produces a residual layer that is then added to Y(n) +X(n−1) to reconstruct image Y(n−1), which is equivalent to the corresponding Gaussian pyramid level n − 1. This refinement-upsampling process proceeds until the final output image, Y, is produced. Our network is fully differentiable and thus can be trained in an end-to-end manner. Additional details of our network are provided in the supplementary materials.

    The image exposure correction architecture sequentially processes the n-level Laplacian pyramid X of the input image I to produce the final corrected image Y. The model consists of n sub-networks, each a U-Net-like architecture with untied weights. Network capacity, in the form of weights, is allocated according to how significantly each sub-problem (i.e., global color correction and detail enhancement) contributes to the final result.

    Figure 4: Overview of our image exposure correction architecture. We propose a coarse-to-fine deep network to progressively correct exposure errors in 8-bit sRGB images. Our network first corrects the global color captured at the final level of the Laplacian pyramid and then the subsequent frequency layers. 

    Fig. 4 gives an overview of the network. As shown, the largest (in terms of weights) sub-network in the architecture is dedicated to processing the global color information in I (i.e., X(n)). This sub-network (shown in yellow in Fig. 4) processes the low-frequency level X(n) and produces an upscaled image Y(n). The upscaling step scales the sub-network's output up by a factor of two using strided transposed convolutions with trainable weights.

    Next, the first mid-frequency level X(n−1) is added to Y(n) and processed by the model's second sub-network. This sub-network enhances the details of the current level and produces a residual layer, which is then added to Y(n) + X(n−1) to reconstruct the image Y(n−1), equivalent to the corresponding Gaussian pyramid level n−1. This refinement-upsampling process continues until the final output image Y is produced.
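    The refinement-upsampling loop can be sketched as follows, reusing toy 1-D pyramid operations in place of the real 2-D ones. The learned U-Net sub-networks are replaced by hypothetical placeholders (identity color correction, zero residuals), so with these placeholders the loop reduces to plain pyramid reconstruction; in the actual model each placeholder is a trained network and the upsampling is a learned transposed convolution:

```python
def downsample(x):
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x):  # stand-in for the paper's learned transposed convolution
    return [v for v in x for _ in range(2)]

def laplacian_pyramid(x, n):
    levels = []
    for _ in range(n - 1):
        low = downsample(x)
        levels.append([a - b for a, b in zip(x, upsample(low))])
        x = low
    levels.append(x)
    return levels

# Hypothetical placeholders for the learned sub-networks:
def color_subnet(x_low):        # largest sub-network: corrects global color
    return x_low                # identity here
def detail_subnet(level, base): # per-level sub-network: predicts a residual
    return [0.0] * len(base)    # zero residual here

def coarse_to_fine(pyramid):
    y = upsample(color_subnet(pyramid[-1]))         # Y(n), upscaled 2x
    for x_level in reversed(pyramid[:-1]):          # X(n-1) ... X(1)
        base = [a + b for a, b in zip(y, x_level)]  # Y(k+1) + X(k)
        residual = detail_subnet(x_level, base)     # predicted detail refinement
        y = [a + b for a, b in zip(base, residual)] # Y(k), a Gaussian-pyramid level
        if x_level is not pyramid[0]:
            y = upsample(y)                         # feed the next finer level
    return y

signal = [0.1, 0.2, 0.4, 0.8, 0.9, 0.7, 0.3, 0.2]
out = coarse_to_fine(laplacian_pyramid(signal, n=3))
# with identity sub-networks the loop simply reconstructs the input signal
```

    Because every step (add, residual, upsample) is differentiable, the whole cascade can be trained end-to-end, as the paper notes.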

    ------------------------------------------------------------

    The next three subsections are relatively straightforward; please refer to the original paper.

    4.3. Losses

    4.4. Inference Stage

    4.5. Training Details

    Finally, here are a few experimental results. It has to be said that the paper's experiments are also quite thorough!

    Figure 7: Qualitative results of correcting images with exposure errors. Shown are the input images from our test set, results from the DPED [26], results from the Deep UPE [11], our results, and the corresponding ground truth images. 

    Table 1: Quantitative evaluation on our introduced test set. The best results are highlighted with green and bold. The second- and third-best results are highlighted in yellow and red, respectively. We compare each method with properly exposed reference image sets rendered by five expert photographers [6]. For each method, we present peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) [67], and perceptual index (PI) [3]. We denote methods designed for underexposure correction in gray. Non-deep learning methods are marked by ∗. The terms U and S stand for unsupervised and supervised, respectively. Notice that higher PSNR and SSIM values are better, while lower PI values indicate better perceptual quality.


    This post briefly summarizes the related work on exposure correction; the content is taken from the CVPR 2021 paper Learning Multi-Scale Photo Exposure Correction.

    Blog: https://blog.csdn.net/u014546828/article/details/122552236

    Related Work on the Exposure Correction

    Original paper: Learning Multi-Scale Photo Exposure Correction

    The focus of our paper is on correcting exposure errors in camera-rendered 8-bit sRGB images. We refer the reader to [9, 24, 25, 38] for representative examples for rendering linear raw-RGB images captured with low-light or exposure errors.

    This paper focuses on correcting exposure errors in camera-rendered 8-bit sRGB images. Readers are referred to [9, 24, 25, 38] for representative examples of rendering linear raw-RGB images captured with low-light or exposure errors.

    [9] Learning to see in the dark. In CVPR, 2018.

     [24] Burst photography for high dynamic range and low-light imaging on mobile cameras. ACM Transactions on Graphics (TOG), 35(6):1–12, 2016.

    [25] Exposure: A white-box photo postprocessing framework. ACM Transactions on Graphics (TOG), 37(2):26:1–26:17, 2018.

    [38] Handheld mobile photography in very low light. ACM Transactions on Graphics (TOG), 38(6):1–16, 2019.

    Exposure Correction

    Traditional methods for exposure correction and contrast enhancement rely on image histograms to adjust image intensity values [8, 19, 36, 50, 69]. Alternatively, tone curve adjustment is used to correct images with exposure errors. This process is performed by relying either solely on input image information [63] or trained deep learning models [21, 46, 48, 62]. The majority of prior work adopts the Retinex theory [34] by assuming that improperly exposed images can be formulated as a pixel-wise multiplication of target images, captured with correct exposure settings, by illumination maps. Thus, the goal of these methods is to predict illumination maps to recover the well-exposed target images. Representative Retinex-based methods include [23, 29, 34, 44, 57, 64, 65] and the most recent deep learning ones [56, 58, 66]. Most of these methods, however, are restricted to correcting underexposure errors [23,56,58–60,65,66,68]. In contrast to the majority of prior work, our work is the first deep learning method to explicitly correct both overexposed and underexposed photographs with a single model.

    Traditional methods for exposure correction and contrast enhancement rely on image histograms to adjust image intensity values. Alternatively, tone-curve adjustment is used to correct images with exposure errors; this relies either solely on the input image information or on trained deep learning models. Most prior work adopts the Retinex theory, assuming that an improperly exposed image can be formulated as the pixel-wise multiplication of a target image, captured with the correct exposure settings, by an illumination map. The goal of these methods is therefore to predict the illumination map and recover the well-exposed target image. Representative Retinex-based methods include the classical approaches and the most recent deep learning ones. Most of these methods, however, are restricted to correcting underexposure errors. In contrast to the majority of prior work, this is the first deep learning method that explicitly corrects both overexposed and underexposed photographs with a single model.

    [21] Zero-reference deep curve estimation for low-light image enhancement. In CVPR, 2020.

    [46] DeepLPF: Deep local parametric filters for image enhancement. In CVPR, 2020.

    [48] Distort-and-recover: Color enhancement using deep reinforcement learning. In CVPR, 2018.

    [62] DeepExposure: Learning to expose photos with asynchronously reinforced adversarial learning. In NeurIPS, 2018.

    [64] Dual illumination estimation for robust exposure correction. In Computer Graphics Forum, 2019. 

    [65] High-quality exposure correction of underexposed photos. In ACM MM, 2018.

    [56] Underexposed photo enhancement using deep illumination estimation. In CVPR, 2019.

    [66] Kindling the darkness: A practical low-light image enhancer. In ACM MM, 2019.

    HDR Restoration and Image Enhancement

    HDR restoration is the process of reconstructing scene radiance HDR values from one or more low dynamic range (LDR) input images. Prior work either requires access to multiple LDR images [16, 30, 43] or uses a single LDR input image, which is converted to an HDR image by hallucinating missing information [15, 47]. Ultimately, these reconstructed HDR images are mapped back to LDR for perceptual visualization. This mapping can be directly performed from the input multi-LDR images [7, 13], the reconstructed HDR image [61], or directly from the single input LDR image without the need for radiance HDR reconstruction [11, 18]. There are also methods that focus on general image enhancement that can be applied to enhancing images with poor exposure. In particular, work by [26, 27] was developed primarily to enhance images captured on smartphone cameras by mapping captured images to appear as high-quality images captured by a DSLR. Our work does not seek to reconstruct HDR images or general enhancement, but instead is trained to explicitly address exposure errors.

    HDR restoration is the process of reconstructing scene-radiance HDR values from one or more low dynamic range (LDR) input images. Prior work either requires access to multiple LDR images or uses a single LDR input image, which is converted to an HDR image by hallucinating the missing information [15, 47]. Ultimately, the reconstructed HDR images are mapped back to LDR for perceptual visualization. This mapping can be performed directly from the input multi-LDR images, from the reconstructed HDR image, or directly from the single input LDR image without radiance HDR reconstruction. There are also general image enhancement methods that can be applied to images with poor exposure. In particular, the work of [26, 27] was developed primarily to enhance images captured by smartphone cameras, mapping them to look like high-quality DSLR captures. This paper does not seek HDR reconstruction or general enhancement; instead, the model is trained to explicitly address exposure errors.

    [15] HDR image reconstruction from a single exposure using deep CNNs. ACM Transactions on Graphics (TOG), 36(6):178:1–178:15, 2017.

    [7] Learning a deep single image contrast enhancer from multi-exposure images. IEEE Transactions on Image Processing, 27(4):2049–2062, 2018.

    [61] Image correction via deep reciprocating HDR transformation. In CVPR, 2018.

    [11] Deep photo enhancer: Unpaired learning for image enhancement from photographs with GANs. In CVPR, 2018.

    [18] Deep bilateral learning for real-time image enhancement. ACM Transactions on Graphics (TOG), 36(4):118:1–118:12, 2017.

    [26] DSLR-quality photos on mobile devices with deep convolutional networks. In ICCV, 2017.

    Paired Dataset

    Paired datasets are crucial for supervised learning for image enhancement tasks. Existing paired datasets for exposure correction focus only on low-light underexposed images. Representative examples include Wang et al.’s dataset [56] and the low-light (LOL) paired dataset [58]. Unlike existing datasets for exposure correction, we introduce a large image dataset rendered with a wide range of exposure errors. Fig. 2 shows a comparison between our dataset and the LOL dataset in terms of the number of images and the variety of exposure errors in each dataset. The LOL dataset covers a relatively small fraction of the possible exposure levels, as compared to our introduced dataset. Our dataset is based on the MIT-Adobe FiveK dataset [6] and is accurately rendered by adjusting the high tonal values provided in camera sensor raw-RGB images to realistically emulate camera exposure errors. An alternative worth noting is to use a large HDR dataset to produce training data—for example, the Google HDR+ dataset [24]. One drawback, however, is that this dataset is a composite of a varying number of smartphone captured raw-RGB images that were first aligned to a composite raw-RGB image. The target ground truth image is based on an HDR-to-LDR algorithm applied to this composite raw-RGB image [18,24]. We opt instead to use the FiveK dataset as it starts with a single high-quality raw-RGB image and the ground truth result is generated by an expert photographer.

    Paired datasets are crucial for supervised learning in image enhancement tasks. Existing paired datasets for exposure correction cover only low-light, underexposed images; representative examples include Wang et al.'s dataset and the low-light (LOL) paired dataset. Unlike existing exposure correction datasets, this paper introduces a large image dataset rendered with a wide range of exposure errors. Fig. 2 compares this dataset with the LOL dataset in terms of the number of images and the variety of exposure errors.

    Compared with the introduced dataset, the LOL dataset covers a relatively small fraction of the possible exposure levels. The introduced dataset is based on the MIT-Adobe FiveK dataset [6] and is accurately rendered by adjusting the high tonal values available in the camera-sensor raw-RGB images to realistically emulate camera exposure errors. A noteworthy alternative is to generate training data from a large HDR dataset, for example the Google HDR+ dataset [24]. One drawback, however, is that HDR+ is a composite of a varying number of smartphone-captured raw-RGB images that are first aligned into a composite raw-RGB image, and the target ground-truth image is produced by applying an HDR-to-LDR algorithm to that composite. The paper instead opts for the FiveK dataset, since it starts from a single high-quality raw-RGB image and its ground truth is generated by an expert photographer.

    [56] Underexposed photo enhancement using deep illumination estimation. In CVPR, 2019.

    [58] Deep Retinex decomposition for low-light enhancement. In BMVC, 2018.

    [6] Learning photographic global tonal adjustment with a database of input / output image pairs. In CVPR, 2011.

    [24] Burst photography for high dynamic range and low-light imaging on mobile cameras. ACM Transactions on Graphics (TOG), 35(6):1–12, 2016.

    Other related references:

    [1] Adobe. Color and camera raw. https://helpx. adobe.com/ca/photoshop- elements/using/ color-camera-raw.html. Accessed: 2020-11-12.

    [2] When color constancy goes wrong: Correcting improperly white-balanced images. In CVPR, 2019.

    [10] Bilateral guided upsampling. ACM Transactions on Graphics (TOG), 35(6):1–8, 2016.

     [15] Gabriel Eilertsen, Joel Kronander, Gyorgy Denes, Rafa Mantiuk, and Jonas Unger. HDR image reconstruction from a single exposure using deep CNNs. ACM Transactions on Graphics (TOG), 36(6):178:1–178:15, 2017.

    [16] Yuki Endo, Yoshihiro Kanamori, and Jun Mitani. Deep reverse tone mapping. ACM Transactions on Graphics (TOG), 36(6):177:1–177:10, 2017.

    [17] Xueyang Fu, Delu Zeng, Yue Huang, Xiao-Ping Zhang, and Xinghao Ding. A weighted variational model for simultaneous reflectance and illumination estimation. In CVPR, 2016.

    [22] Xiaojie Guo. LIME: A method for low-light image enhancement. In ACM MM, 2016.

    [23] Xiaojie Guo, Yu Li, and Haibin Ling. LIME: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing, 26(2):982–993, 2017.

    [27] Andrey Ignatov, Nikolay Kobyshev, Radu Timofte, Kenneth Vanhoey, and Luc Van Gool. WESPE: Weakly supervised photo enhancer for digital cameras. In CVPR Workshops, 2018.

    [28] Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, and Zhangyang Wang. EnlightenGAN: Deep light enhancement without paired supervision. arXiv preprint arXiv:1906.06972, 2019.

    [29] Daniel J Jobson, Ziaur Rahman, and Glenn A Woodell. A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image Processing, 6(7):965–976, 1997.

    [30] Nima Khademi Kalantari and Ravi Ramamoorthi. Deep high dynamic range imaging of dynamic scenes. ACM Transactions on Graphics (TOG), 36(4):144–1, 2017.

    [34] Edwin H Land. The Retinex theory of color vision. Scientific American, 237(6):108–129, 1977.

    [36] Chulwoo Lee, Chul Lee, and Chang-Su Kim. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Transactions on Image Processing, 22(12):5372–5384, 2013.

    [40] Liqian Ma, Xu Jia, Qianru Sun, Bernt Schiele, Tinne Tuytelaars, and Luc Van Gool. Pose guided person image generation. In NeurIPS, 2017.

    [41] Ruijun Ma, Haifeng Hu, Songlong Xing, and Zhengming Li. Efficient and fast real-world noisy image denoising by combining pyramid neural network and two-pathway unscented Kalman filter. IEEE Transactions on Image Processing, 29(1):3927–3940, 2020.

    [42] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.

    [43] Tom Mertens, Jan Kautz, and Frank Van Reeth. Exposure fusion: A simple and practical alternative to high dynamic range photography. In Computer Graphics Forum, 2009.

    [44] Laurence Meylan and Sabine Susstrunk. High dynamic range image rendering with a Retinex-based adaptive filter. IEEE Transactions on Image Processing, 15(9):2820–2830, 2006.

    [46] Sean Moran, Pierre Marza, Steven McDonagh, Sarah Parisot, and Gregory Slabaugh. DeepLPF: Deep local parametric filters for image enhancement. In CVPR, 2020.

    [47] Kenta Moriwaki, Ryota Yoshihashi, Rei Kawakami, Shaodi You, and Takeshi Naemura. Hybrid loss for learning single-image-based HDR reconstruction. arXiv preprint arXiv:1812.07134, 2018.

    [48] Jongchan Park, Joon-Young Lee, Donggeun Yoo, and In So Kweon. Distort-and-recover: Color enhancement using deep reinforcement learning. In CVPR, 2018.

    [49] Bryan Peterson. Understanding exposure: How to shoot great photographs with any camera. AmPhoto Books, 2016. 1

    [51] Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Fredo Durand, and Saman Amarasinghe. ´ Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. In ACM PLDI, 2013.

    [53] Jeff Schewe and Bruce Fraser. Real World Camera Raw with Adobe Photoshop CS5. Pearson Education, 2010.

    [54] Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. SinGAN: Learning a generative model from a single natural image. In ICCV, 2019. 

    [57] Shuhang Wang, Jin Zheng, Hai-Miao Hu, and Bo Li. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Transactions on Image Processing, 22(9):3538–3548, 2013.

    [59] Ke Xu, Xin Yang, Baocai Yin, and Rynson WH Lau. Learning to restore low-light images via decomposition-andenhancement. In CVPR, 2020. 2

    [60] Wenhan Yang, Shiqi Wang, Yuming Fang, Yue Wang, and Jiaying Liu. From fidelity to perceptual quality: A semisupervised approach for low-light image enhancement. In CVPR, 2020.

    [62] Runsheng Yu, Wenyu Liu, Yasen Zhang, Zhi Qu, Deli Zhao, and Bo Zhang. DeepExposure: Learning to expose photos with asynchronously reinforced adversarial learning. In NeurIPS, 2018.

    [63] Lu Yuan and Jian Sun. Automatic exposure correction of consumer photographs. In ECCV, 2012.

    [67] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.

    [68] Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. EEMEFN: Low-light image enhancement via edgeenhanced multi-exposure fusion network. In AAAI, 2020.

  • Paper notes: DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs

    Code: hli1221/Imagefusion_deepfuse (image fusion based on the DeepFuse network, TensorFlow)

    Abstract

    Key point: the paper proposes a new deep-learning architecture for fusing static multi-exposure images.

    Background:

    • Current multi-exposure fusion (MEF) methods rely on hand-crafted features to fuse the input sequence.

      However, weak hand-crafted representations are not robust to varying input conditions, and they perform poorly on extreme-exposure image pairs.

      A method that adapts to different input conditions and handles extreme exposures without introducing artifacts is therefore highly desirable.

    • Deep features are known to be robust to varying input conditions and have shown impressive performance in supervised settings.

      However, the obstacle to using deep learning for MEF is the lack of sufficient training data and of ground truth for supervision.

    Solution:

    • Collected a large dataset of multi-exposure image stacks for training.
    • To avoid the need for ground-truth images, proposed an unsupervised deep-learning framework for MEF that uses a no-reference quality metric as the loss function.

    Experimental setup:

    • **CNN model:** fuses a set of common low-level features extracted from each image, producing artifact-free, visually pleasing results.
    • **Evaluation:** extensive quantitative and qualitative evaluations show that the proposed method outperforms existing state-of-the-art approaches on a wide variety of natural images.

    Introduction

    HDRI:

    • High dynamic range imaging (HDRI) is a photographic technique that helps capture better-looking photos under varied lighting conditions. It stores the full range of light (or luminance) perceivable by the human eye, rather than the limited range captured by a camera.

    MEF:

    • A popular approach to HDR image generation is multi-exposure fusion (MEF), which fuses a set of static LDR images with different exposure levels (an exposure stack) into a single HDR-like image.
    • Long-exposure images (shot with a high exposure time) carry better color and structure information in dark regions, while short-exposure images (shot with a low exposure time) carry better color and structure information in bright regions.

    Limitations of existing methods:

    • They need a fairly large number of LDR images to capture the full dynamic range of the scene, and most MEF algorithms only work well when the exposure bias between consecutive LDR images in the stack is small. This increases storage requirements, processing time, and power consumption.
    • Existing methods cannot maintain uniform brightness across the image.

    **This paper's method:** a data-driven learning approach for fusing exposure-bracketed static image pairs.

    • The first work to use a deep CNN architecture for exposure fusion.
    • **Architecture:** the initial layers consist of a set of filters that extract common low-level features from each input image in the pair; these low-level features are then fused to reconstruct the final result.
    • **Training:** the whole network is trained end-to-end with a **no-reference image quality loss function**.

    Experimental setup and evaluation:

    • The model is trained and tested on a large number of exposure stacks captured under diverse settings (indoor/outdoor, day/night, side-lit/backlit, etc.).
    • The model needs no parameter tuning for different input conditions.
    • Extensive experimental evaluation shows that the proposed architecture outperforms state-of-the-art methods over a wide range of input scenes.

    Contributions:

    • A CNN-based unsupervised image fusion algorithm for fusing exposure-bracketed static image pairs.
    • A new benchmark dataset for comparing various MEF methods.
    • An extensive experimental evaluation and comparative study against 7 state-of-the-art algorithms on a variety of natural images.

    Related Works

    • Weight-map-based fusion methods.

    • The edge-artifact problem and some proposed remedies.

    • Methods relying on hand-crafted features for image fusion lack robustness.

    • Advantages and wide applicability of CNNs.

    Proposed Method

    A CNN-based image fusion framework.

    Notation:

    • Input exposure sequence: $I$

    • Fusion operator: $O(I)$

    • Feed-forward process: $F_W(I)$

    • Loss function: MEF SSIM

      MEF SSIM is built on the structural similarity index (SSIM) framework [27]. It compares the patch statistics around each pixel of the input image sequence with those of the result, measuring the loss of structural integrity as well as luminance consistency at multiple scales.

    Pipeline:

    • The input exposure stack is converted to YCbCr color channels.

    • The CNN fuses the luminance variations of the input images.

      Image structure resides in the luminance channel, and brightness variations are more pronounced in the luminance channel than in the chrominance channels.

    • The fused luminance channel is combined with chrominance (Cb and Cr) channels generated by a weighted-fusion scheme.

    1. Network architecture (DeepFuse CNN)

    • Three components: feature extraction layers, a fusion layer, and reconstruction layers.
    • Input images: the under-exposed and over-exposed images ($Y_1$ and $Y_2$).
    • **Shared weights:** C11 and C12 (likewise C21 and C22) share the same weights.
    • Advantages of this weight-sharing design:
      • It forces the network to learn identical features for both inputs, so the fusion layer can simply combine the corresponding feature maps.
      • The number of filters to learn is halved.
      • With fewer parameters, the network converges quickly.

    2. MEF SSIM loss function

    • $\{y_k\}, k=1,2$: image patches extracted at a pixel location $p$ from the input image pair.

    • $y_f$: the patch extracted at the same location $p$ from the fused image output by the CNN.

    • Goal: compute a score that measures the fusion performance between $\{y_k\}$ and $y_f$.

    • SSIM framework: any patch can be modeled with three components: structure ($s$), luminance ($l$), and contrast ($c$).

      • A given patch is decomposed into these three components:

        $y_k = \lVert y_k - \mu_{y_k} \rVert \cdot \dfrac{y_k - \mu_{y_k}}{\lVert y_k - \mu_{y_k} \rVert} + \mu_{y_k} = c_k \cdot s_k + l_k$

      • Desired contrast value (the higher the contrast, the better the image quality): $\hat{c} = \max_k c_k$

      • The structure of the desired result is the normalized weighted sum of the input structures: $\hat{s} = \bar{s} / \lVert \bar{s} \rVert$

      • Desired result patch: $\hat{y} = \hat{c} \cdot \hat{s}$

        Since luminance differences within a local patch are insignificant, the luminance component is discarded from the above equation.

      • The final image quality score for pixel $p$ is computed within the SSIM framework by comparing $\hat{y}$ against $y_f$.

      • The total loss is one minus the score averaged over all pixels.
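The desired-patch construction above can be sketched in plain Python. This is a minimal 1-D illustration under the decomposition just described, with the contrast-based structure weight $w(\cdot)$ chosen here for simplicity (the paper's exact weighting may differ); `under` and `over` are hypothetical sample patches:

```python
import math

def decompose(patch):
    """Split a patch into contrast c (L2 norm of the mean-removed patch),
    structure s (unit direction), and luminance l (mean)."""
    l = sum(patch) / len(patch)
    centered = [v - l for v in patch]
    c = math.sqrt(sum(v * v for v in centered))
    s = [v / c for v in centered] if c > 0 else [0.0] * len(patch)
    return c, s, l

def desired_patch(patches):
    """Target patch: maximum contrast times the normalized weighted structure;
    the luminance component is discarded."""
    cs = [decompose(p) for p in patches]
    c_hat = max(c for c, _, _ in cs)  # desired contrast = max over inputs
    # weight each structure by its own contrast (one simple choice of w(.))
    s_bar = [sum(c * s[i] for c, s, _ in cs) for i in range(len(patches[0]))]
    norm = math.sqrt(sum(v * v for v in s_bar)) or 1.0
    s_hat = [v / norm for v in s_bar]
    return [c_hat * v for v in s_hat]

under = [0.1, 0.2, 0.3, 0.4]  # hypothetical under-exposed patch
over = [0.5, 0.8, 0.9, 0.6]   # hypothetical over-exposed patch
print(desired_patch([under, over]))
```

Note that the norm of the returned patch equals the maximum input contrast, by construction.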

    3. Training

    • Collected 25 publicly available exposure stacks from HDR databases.
    • Additionally curated 50 exposure stacks covering diverse scene characteristics, captured with standard camera settings and a tripod.
    • 2 LDR images per stack (±2 EV).
    • Resolution: 1200×800.
    • Indoor and outdoor scenes.
    • 30,000 patches of size 64×64 were cropped for training.
    • Learning rate: $10^{-4}$.
    • 100 epochs.

    4. Testing

    • Model testing: standard cross-validation.

    • Fusion strategy:

      • Luminance channel (Y): the trained CNN.

      • Chrominance channels ($Cb_{fused}$ and $Cr_{fused}$): weighted sum with $\tau = 128$.

    • Score computation:

      • Luminance channel: the MEF SSIM formula is used to compute a score between the two grayscale (Y) images.

    • Final fused image: convert the $\{Y_{fused}, Cb_{fused}, Cr_{fused}\}$ channels into an RGB image.
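The chrominance weighted sum can be sketched per pixel as follows. This is a minimal sketch assuming each chroma value is weighted by its distance from the neutral point $\tau = 128$ (values far from neutral carry more color information); the exact weighting in the paper may differ in detail:

```python
def fuse_chroma(c1, c2, tau=128.0):
    """Weighted sum of two chroma (Cb or Cr) values: the value farther from
    the neutral point tau carries more color and gets more weight."""
    w1, w2 = abs(c1 - tau), abs(c2 - tau)
    if w1 + w2 == 0:      # both inputs neutral: keep the neutral value
        return tau
    return (c1 * w1 + c2 * w2) / (w1 + w2)

# hypothetical per-pixel Cb values from the under- and over-exposed inputs
print(fuse_chroma(100.0, 128.0))  # a neutral second input contributes nothing
print(fuse_chroma(100.0, 180.0))
```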

    Experiments and Results

    **Dataset:** standard image sequences chosen to cover diverse image characteristics, including indoor and outdoor, day and night, natural and artificial lighting, linear and non-linear exposure.

    **Compared MEF algorithms (7):** Mertens09, Li13, Li12, Ma15, Raman11, Shen11, Guo17.

    **Evaluation metric:** MEF SSIM.

    1. DeepFuse - Baseline

    Method:

    • DF-Baseline: train the CNN using fused images generated by other MEF methods as ground truth, testing the loss functions $\ell_1$, $\ell_2$, and SSIM.

      • With the $\ell_2$ loss, the fused images come out blurry.
      • The $\ell_1$ loss gives sharper results than $\ell_2$, but with halo effects along edges.
      • The SSIM loss gives results that are both sharp and artifact-free (the best choice).

    • DF-UnSupervised: the method proposed in this paper.

    Results:

    • The unsupervised DeepFuse variant performs best.

    • DF-Baseline performs only moderately: since it uses fused images from other methods as ground truth, its performance is bounded by the quality of those fused images.

    2. Comparison with State-of-the-art

    Mertens:

    • A simple and effective weighted multi-resolution image fusion technique.

    • Limitations:

      • Cannot maintain consistent brightness across the whole image.

      • Fails to preserve the full image detail of the under-exposed input.

    Li:

    • Exhibits non-uniform brightness artifacts.

    Shen:

    • Loss of contrast and non-uniform brightness distortion.

      Brightness varies across the cloud region; the clouds between the balloons appear darker than other regions.

    Ma:

    • A patch-based fusion algorithm that fuses patches from the input images according to patch strength, computed with a power weighting function on each patch.
    • Limitation: this weighting introduces halo effects along edges.

    **Raman:**

    • Color distortion and loss of contrast.

    This paper's method:

    • Faithfully reproduces all features of the input pair.

    • The fused results are free of artifacts such as darkened regions and mismatched colors.

    • Preserves finer image detail along with higher contrast and vivid colors.

    • Runs 3-4× faster than Mertens's method.

    • DeepFuse easily extends to more input images by adding extra streams before the merge layer.

      MEF SSIM scores for sequences of 3 and 4 images:

      | Sequence length | DF    | Mertens et al. |
      |-----------------|-------|----------------|
      | 3               | 0.987 | 0.979          |
      | 4               | 0.972 | 0.978          |

    3. Application to Multi-Focus Fusion

    • The CNN generalizes to other image fusion tasks, such as multi-focus image fusion.

    Conclusion and Future work

    • The paper proposes an effective method for fusing image pairs with multiple exposure levels, producing artifact-free and perceptually pleasing fusion results.
    • DeepFuse is the first unsupervised deep-learning method applied to static MEF.
    • The method extracts common low-level features from each input image; a fusion layer then merges these features into a single fused feature map, and reconstruction layers produce the final fused image from it.
    • The model is trained and tested on a large collection of multi-exposure stacks with diverse settings.
    • The model requires no parameter tuning for different input settings.
    • It outperforms state-of-the-art MEF algorithms in both quantitative and qualitative evaluations.

    Dual Illumination Estimation for Robust Exposure Correction

    [pdf]

    Contents

    Abstract

    1. Introduction

    3. Our Approach

    3.1. Dual illumination estimation Background.

    3.2. Multi-exposure image fusion


    Abstract

    Exposure correction is one of the fundamental tasks in image processing and computational photography. While various methods have been proposed, they either fail to produce visually pleasing results, or only work well for limited types of image (e.g., underexposed images). In this paper, we present a novel automatic exposure correction method, which is able to robustly produce high-quality results for images of various exposure conditions (e.g., underexposed, overexposed, and partially under- and over-exposed). At the core of our approach is the proposed dual illumination estimation, where we separately cast the underand over-exposure correction as trivial illumination estimation of the input image and the inverted input image. By performing dual illumination estimation, we obtain two intermediate exposure correction results for the input image, with one fixes the underexposed regions and the other one restores the overexposed regions. A multi-exposure image fusion technique is then employed to adaptively blend the visually best exposed parts in the two intermediate exposure correction images and the input image into a globally well-exposed image. Experiments on a number of challenging images demonstrate the effectiveness of the proposed approach and its superiority over the state-of-the-art methods and popular automatic exposure correction tools.

    Background: exposure correction is one of the fundamental tasks in image processing and computational photography. Although various methods have been proposed, they either fail to produce visually pleasing results or only work well for limited types of images (e.g., underexposed images).

    Method: this paper presents a novel automatic exposure correction method that robustly produces high-quality results for images under various exposure conditions (underexposed, overexposed, and partially under- and over-exposed).

    At its core is the proposed dual illumination estimation, which separately casts under- and over-exposure correction as simple illumination estimation on the input image and on the inverted input image. Dual illumination estimation yields two intermediate exposure-corrected results for the input image: one fixes the underexposed regions and the other restores the overexposed regions. A multi-exposure image fusion technique is then employed to adaptively blend the visually best-exposed parts of the two intermediate corrected images and the input image into a globally well-exposed image.

    Experiments on a number of challenging images demonstrate the effectiveness of the proposed approach and its superiority over state-of-the-art methods and popular automatic exposure correction tools.

    1. Introduction

    Practical motivation: in short, although cameras are highly capable, difficult environments and limited photographer skill still produce flawed images; and even with powerful editing software, non-experts struggle, and automatic tools are not always accurate.

    With the popularity of camera-equipped mobile devices and inexpensive digital cameras, people are increasingly interested in taking photos, and sharing them on social networks has become a fashionable lifestyle. Although modern cameras carry many sophisticated technologies and are generally easy to control and use, capturing well-exposed photos under complex lighting conditions (e.g., low light and backlighting) remains a challenge for non-professional photographers. Poorly exposed photos are therefore inevitable, as shown in Figure 1. With unclear details, weak contrast, and dull colors, such photos usually look unpleasant and fail to capture what the user intended, which raises the demand for effective exposure correction techniques.

    Exposure correction is a challenging task because of its inherent non-linearity and subjectivity. Indeed, existing image editing software (e.g., Photoshop, GIMP, and Lightroom) provides users with various tools for interactively adjusting the tone and exposure of photos, but these remain difficult for non-professionals, since they essentially require a tedious process of balancing multiple controls (brightness, contrast, color, etc.). Although the "Auto Tone" feature in Lightroom and the "Auto Level" feature in Photoshop allow single-click automatic exposure correction, they may not always apply the right adjustment to the input image and thus can fail to produce satisfactory results. Figure 2 shows example images processed by these tools.

    Figure 2: An overexposed image processed by various exposure correction tools. (b) and (c) are results generated by Auto-Level in Photoshop and Auto-Tone in Lightroom, while result (d) is produced by a Photoshop expert through interactive adjustment. The time cost for generating each result is also shown for evaluating the ease of use and algorithm efficiency. 

    Limitations of existing methods:

    Researchers have also developed various exposure correction methods. However, most address only under- or over-exposure, and thus have limited applicability. A few methods do handle images under arbitrary exposure conditions. Early approaches such as histogram equalization and its variants work by stretching the dynamic range of the intensity histogram, but often produce unrealistic results. Later methods rely on S-shaped tone-mapping curves or wavelets, while more recent work trains tone adjustment models on datasets for exposure correction. However, these do not handle overexposed images well and can produce unnatural results; see Figure 11.

    Method:

    This paper proposes a new exposure correction method built on the observation that under- and over-exposure correction can be jointly formulated as a simple illumination estimation problem on the input image and on the inverted input image. While previous methods have demonstrated the effectiveness of illumination estimation for correcting underexposed photos, they have barely explored its potential for handling overexposure. Unlike them, this paper finds that overexposure correction can also be expressed as an illumination estimation problem by inverting the input image: originally overexposed regions appear underexposed in the inverted image, which allows overexposed regions in the input to be fixed by correcting the corresponding underexposed regions in the inverted input. The paper therefore introduces dual illumination estimation, which separately predicts the forward illumination of the input image and the reverse illumination of the inverted image. Two intermediate exposure-corrected images are then recovered from the estimated forward and reverse illuminations, one fixing the underexposed regions and the other restoring the overexposed regions. An effective multi-exposure image fusion is then applied to the intermediate corrected images and the input image, seamlessly blending the locally best-exposed parts of the three images into a globally well-exposed image.

    Contributions:

    The contribution is a simple yet effective exposure correction method built on a novel dual illumination estimation. To demonstrate its effectiveness, the method is evaluated on a number of challenging images and compared with state-of-the-art methods and popular exposure correction tools through a user study. Experiments show that its results are preferred by more subjects, and that it can effectively handle previously challenging images (e.g., images with both under- and over-exposed regions). Moreover, the method is fully automatic and runs at near-interactive rates.

    3. Our Approach

    Figure 3: Overview of the proposed exposure correction algorithm. Given an input image, the dual illumination estimation is first performed to obtain the forward and reverse illuminations, from which we then recover the intermediate under- and over-exposure corrected images of the input. Next, an effective multi-exposure image fusion is applied to seamlessly blend visually best exposed parts in the two intermediate exposure correction images as well as the input image into the final globally well-exposed image. 

    Figure 3 gives a system overview of the exposure correction algorithm. Given an input image, dual illumination estimation is first performed to obtain the forward and reverse illuminations, from which the intermediate under- and over-exposure corrected images are recovered. The two intermediate corrected images and the input image are then fused, seamlessly blending the best-exposed parts of the three images into the desired globally well-exposed output.

    3.1. Dual illumination estimation

      Background  

    Dual illumination estimation builds on the assumption of Retinex-based image enhancement: an image I (normalized to [0,1]) can be characterized as the pixel-wise product of the desired enhanced image I' and a single-channel illumination map L:

    I = I' × L, (1)

    where × denotes pixel-wise multiplication. Under this assumption, image enhancement reduces to an illumination estimation problem: once the illumination map L is known, the desired image I' can be recovered. However, Retinex-based methods do not work well on overexposed images. The reason is that reducing an image's exposure would require the illumination map L in Eq. (1) to exceed the normal gamut (i.e., L > 1), since the result I' is recovered as I × L^{−1}. Figure 4 shows an example in which Retinex-based enhancement methods further increase the exposure of an overexposed input, producing the visually poor images in Figures 4(b) and (c).

    Figure 4: Limitation of existing Retinex-based image enhancement methods (b) and (c) in correcting an overexposed image (a). 
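A scalar example makes the limitation concrete (illustrative pixel values only): recovering I' = I × L^{−1} can only brighten when L ∈ (0, 1], so darkening an overexposed pixel would need L > 1, outside the valid range:

```python
def retinex_recover(i, l):
    """Recover the enhanced pixel I' = I / L under the Retinex model I = I' * L,
    clipped to the normalized gamut [0, 1]."""
    return min(i / l, 1.0)

dark = 0.2
print(retinex_recover(dark, 0.25))  # an underexposed pixel is brightened to 0.8

bright = 0.9
# darkening 0.9 to, say, 0.6 would need L = 0.9 / 0.6 = 1.5 > 1; this is why
# plain Retinex-based enhancement fails on overexposed regions
print(bright / 0.6)
```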

      Key Observation  

    Unlike previous Retinex-based enhancement methods, this paper observes that overexposure correction can also be formulated as an illumination estimation problem by inverting the input image: originally overexposed regions appear underexposed in the inverted image, so overexposed regions in the input can be fixed by correcting the corresponding underexposed regions in the inverted input.

    Concretely, to correct the overexposed regions of an input image I, first obtain its inverted image I_{inv} = 1 − I and estimate the corresponding illumination map L_{inv}. Then compute the underexposure-corrected image I'_{inv} = I_{inv} × L^{−1}_{inv}, and recover the desired overexposure-corrected image I' = 1 − I'_{inv}. Note that the inverted input is usually not a realistic image, but the recovered overexposure-corrected image is. Figure 5 validates this observation: the overexposed image is successfully corrected via illumination estimation on the inverted input.

    Figure 5: Validation of our observation. (a) Input overexposed image I. (b) Inverted input image Iinv. (c) and (d) are illumination Linv and underexposure corrected image I'_{inv} of the inverted image I_{inv}. (e) Overexposure corrected image I' = 1−I'_{inv} of the input image (a). 

    Notably, inverted images have been used in previous enhancement methods [DWP∗11, LWWG15]. This paper's use of inversion differs from theirs in two ways. First, those works focus on enhancing low-light images/videos, whereas this paper targets overexposed photos. Second, they observe that inverted low-light images look like hazy images and therefore apply a dehazing algorithm to produce the final result; this paper instead observes that overexposed images become underexposed when inverted and can be corrected indirectly via illumination estimation.
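The inversion trick reduces to a few lines per pixel. Below is a sketch in which `crude_illum` is a deliberately simplistic stand-in for the illumination estimate; the paper's refined, edge-preserving illumination is not reproduced here:

```python
def correct_under(i, illum):
    """Underexposure correction: I' = I / L, clipped to the [0, 1] gamut."""
    return min(i / max(illum, 1e-6), 1.0)

def correct_over(i, estimate_illum):
    """Overexposure correction via inversion: invert the pixel, fix the
    now-underexposed value, then invert back (I' = 1 - I'_inv)."""
    i_inv = 1.0 - i                   # I_inv = 1 - I
    l_inv = estimate_illum(i_inv)     # reverse illumination L_inv
    return 1.0 - correct_under(i_inv, l_inv)

# crude stand-in for the illumination estimate: the inverted value itself,
# floored at 0.5 (purely illustrative, not the paper's estimator)
crude_illum = lambda v: max(v, 0.5)

print(correct_over(0.95, crude_illum))  # a blown-out pixel is darkened
```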

    [DWP∗11] Fast efficient algorithm for enhancement of low lighting video. In ICME (2011).

    [LWWG15] A low-light image enhancement method for both denoising and contrast enlarging. In ICIP (2015).

    Based on this observation, dual illumination estimation is designed: a first pass estimates the forward illumination of the input image, aiming to correct underexposed regions, while a second pass estimates the reverse illumination from the inverted input to correct overexposed regions. The reason for this design is that the input image may be simultaneously partially under- and over-exposed, so two illumination estimates are needed to correct regions under different exposure conditions. The forward and reverse illuminations are estimated independently within the same illumination estimation framework.

      Illumination Estimation Framework  

    To estimate the illumination of a given image I, first take the maximum RGB color channel as the illumination value of each pixel, yielding the initial illumination L':

    L'_p = max_{c ∈ {r,g,b}} I^c_p, (2)

    where I^c_p denotes color channel c of I at pixel p.

    The maximum color channel is used as the initial illumination because, with I' = I × L'^{−1}, a smaller illumination would risk pushing color channels of the recovered image I' out of gamut. Although the initial illumination map roughly describes the overall illumination distribution, it usually contains rich details and textures that are not caused by illumination discontinuities, which makes results recovered from it unrealistic, as shown in Figures 6(b) and (c). A refined illumination map L is therefore estimated from L', preserving salient structure while removing redundant texture details.

    To this end, the following objective function is defined to obtain the desired illumination map L:

    min_L Σ_p ( (L_p − L'_p)^2 + λ ( w_{x,p} (∂_x L)_p^2 + w_{y,p} (∂_y L)_p^2 ) ), (3)

    where ∂x and ∂y are the spatial derivatives in the horizontal and vertical directions, and w_{x,p} and w_{y,p} are spatially varying smoothness weights. The first term (L_p − L'_p)^2 forces L to stay similar to the initial illumination map L', while the second removes redundant texture details in L' by minimizing the partial derivatives. λ is a weight balancing the two terms.

    Intuitively, the objective in Eq. (3) is similar in form to that of WLS smoothing. However, the smoothness weights are defined differently. Specifically, the x-direction smoothness weight w_{x,p} is expressed in terms of T_{x,p} (Eq. (4)), a measure inspired by the relative total variation (RTV) [XYXJ12] that aggregates Gaussian-weighted derivatives over a local window (Eq. (5)),

    where Ω_p in Eqs. (4) and (5) denotes a 15 × 15 square window centered at pixel p, and ε is fixed to 1e-3. G_σ(p,q) computes a spatial-affinity-based Gaussian weight between pixels p and q, with standard deviation σ = 3:

    G_σ(p,q) = exp( −D(p,q)^2 / (2σ^2) ), (6)

    where the function D(p,q) computes the spatial Euclidean distance between pixels p and q. The y-direction smoothness weight w_{y,p} is defined analogously and omitted here.

    The objective function in Eq. (3) can be solved efficiently; see [LLW04, LFUS06, FFLS08].

    [XYXJ12] Structure extraction from texture via relative total variation. ACM Transactions on Graphics 31, 6 (2012)

    [LLW04] Colorization using optimization. ACM transactions on graphics 23, 3 (2004)

    [LFUS06] Interactive local adjustment of tonal values. ACM Transactions on Graphics 25, 3 (2006)

    [FFLS08] Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Transactions on Graphics 27, 3 (2008)

    Similar to [FZH∗16, GLL17], to recover brighter results, a Gamma adjustment is additionally applied to the estimated illumination L, i.e., L ← L^γ, and the exposure-corrected result is recovered as I' = I × (L^γ)^{−1}, with γ = 0.6. Figure 6 shows the effectiveness of this illumination estimation in correcting an underexposed image: optimizing the objective function in Eq. (3) yields a piecewise-smooth illumination with few texture details, from which a visually pleasing underexposure-corrected result is recovered.

    Figure 6: Illumination estimation. (a) Input image. (b) Initial illumination. (c) Result recovered from the initial illumination (b). (d) Our refined illumination. (e) Result recovered from our refined illumination (d). Note the forward illumination is estimated here, since the input image is obviously underexposed. Source image from Bychkovsky et al. [BPCD11].
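The gamma-adjusted recovery step I' = I × (L^γ)^{−1} with γ = 0.6 can be sketched per pixel (hypothetical pixel and illumination values):

```python
def recover(i, l, gamma=0.6):
    """Recover the corrected pixel from the refined illumination:
    gamma-adjust L (gamma = 0.6 in the paper) before dividing, and clip."""
    l_gamma = max(l, 1e-6) ** gamma
    return min(i / l_gamma, 1.0)

# hypothetical underexposed pixel with estimated illumination 0.25
print(recover(0.2, 0.25))  # brighter than the 0.2 input
```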

    [FZH∗16] A weighted variational model for simultaneous reflectance and illumination estimation. In CVPR (2016).

    [GLL17] Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing 26, 2 (2017). 

    [BPCD11] Learning photographic global tonal adjustment with a database of input/output image pairs. In CVPR (2011)

    Figure 7 compares this illumination estimation with previous edge-preserving image smoothing methods. For a fair comparison, the authors' implementations with well-tuned parameters were used to generate their illuminations from the same initial illumination, and gamma correction was applied to the illumination produced by each method when recovering the exposure-corrected results.

    As the figure shows, this method's illumination preserves the salient illumination structure while better removing redundant texture details from the initial illumination, recovering results with clearer contrast and more vivid colors.

    Note that although Figure 7 shows forward illumination estimation, the same conclusions hold for reverse illumination estimation, since both build on the same illumination estimation algorithm.

    Figure 7: Comparison with edge-preserving smoothing methods on illumination estimation. (a) and (e) are the input image and the initial illumination. (b) and (c) are smoothed illuminations produced by the WLS smoothing [FFLS08] and the RTV method [XYXJ12]. (f) and (g) are results recovered from the illuminations (b) and (c), respectively. (d) and (h) are our estimated illumination and the corresponding exposure correction results. 

    3.2. Multi-exposure image fusion

    As described above, the proposed dual illumination estimation yields two intermediate exposure-corrected versions of the input image: one corrects the underexposed regions and the other restores the overexposed regions. Intuitively, producing a globally well-exposed image comes down to seamlessly blending the locally best-exposed parts of the two intermediate corrected images. Since the input image may also contain normally exposed regions, the input itself is added, and multi-exposure image fusion is applied to the three images to obtain the final exposure correction result.

    Let I'_f and I'_r denote the intermediate under- and over-exposure corrected images of the input image I. An exposure fusion technique then fuses the image sequence {I'_f, I'_r, I} into a globally well-exposed image I'. Specifically, a visual quality map W_k is first computed for each image in the sequence by combining three measures:

    W_k = C_k^{β_C} · S_k^{β_S} · E_k^{β_E},

    where k indexes the k-th image in the sequence, and C, S, and E are quantitative measures of contrast, saturation, and well-exposedness; see [MKVR09] for details. β_C, β_S, and β_E are parameters controlling the influence of each measure, all 1 by default.

    Pixels with higher visual quality values are more likely to be well exposed. The three visual quality maps are then normalized so that they sum to one at each pixel p.

    [MKVR09] Exposure fusion: A simple and practical alternative to high dynamic range photography. Computer Graphics Forum 28, 1 (2009).

    Next, guided by the precomputed visual quality maps, the images in the sequence are seamlessly fused with the multi-resolution image fusion technique of Burt and Adelson [BA83]. As shown in Figure 8(d), the fused image adaptively preserves the visually best parts of the multi-exposure image sequence (Figures 8(a)-(c)) and, thanks to increased brightness, clear details, distinct contrast, and vivid colors, looks better than the input image.

    [BA83] The laplacian pyramid as a compact image code. IEEE Transactions on Communications 31, 4 (1983).

    However, note that some locally best-exposed regions, such as the face and the sky, degrade noticeably in the fused image. This is because, during fusion, the same regions with lower visual quality in the other images of the sequence weaken their influence. Therefore, instead of normalizing the visual quality maps, they are modified so that only the maximum value across the image sequence survives at each pixel:

    W̄_k(p) = W_k(p) if W_k(p) = max_j W_j(p), and 0 otherwise.

    With the modified visual quality maps, an improved result with clearer face and cloud details, better contrast, and more vivid colors is obtained, as shown in Figure 8(e).

    Figure 8: Multi-exposure image fusion. (a) Input image. (b) and (c) are under- and over-exposure corrected images recovered from the forward and reverse illuminations, respectively. (d) and (e) are fused images produced by the original and modified visual quality maps. 
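The winner-take-all modification of the quality maps can be sketched as follows, assuming the maps are flattened into per-pixel score lists (the values below are hypothetical):

```python
def winner_take_all(quality_maps):
    """Given one visual quality map per image (lists of per-pixel scores),
    zero out every value that is not the per-pixel maximum across the maps."""
    n_pixels = len(quality_maps[0])
    out = [[0.0] * n_pixels for _ in quality_maps]
    for p in range(n_pixels):
        k_best = max(range(len(quality_maps)), key=lambda k: quality_maps[k][p])
        out[k_best][p] = quality_maps[k_best][p]  # only the best-exposed image keeps weight
    return out

# hypothetical 3-image sequence {I'_f, I'_r, I}, 4 pixels each
maps = [[0.9, 0.2, 0.1, 0.5],   # under-exposure corrected image
        [0.1, 0.8, 0.3, 0.5],   # over-exposure corrected image
        [0.3, 0.4, 0.7, 0.6]]   # input image
print(winner_take_all(maps))
```

Each pixel of the final result is thus driven by the single image that is best exposed there, which avoids the dilution caused by normalizing the maps.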
