    Zhang H, Yu Y, Jiao J, et al. Theoretically Principled Trade-off between Robustness and Accuracy[J]. arXiv: Learning, 2019.

    @article{zhang2019theoretically,
    title={Theoretically Principled Trade-off between Robustness and Accuracy},
    author={Zhang, Hongyang and Yu, Yaodong and Jiao, Jiantao and Xing, Eric P and Ghaoui, Laurent El and Jordan, Michael I},
    journal={arXiv: Learning},
    year={2019}}

Starting from the binary classification problem, the paper decomposes $\mathcal{R}_{rob}$ into $\mathcal{R}_{nat}$ and $\mathcal{R}_{bdy}$, builds a loss function from an upper bound on $\mathcal{R}_{rob}-\mathcal{R}_{nat}^*$, and then generalizes this idea to the multi-class setting.

Main Content

Notation

$X, Y$: random variables;
$x\in \mathcal{X},\ y$: a sample and its label ($1$ or $-1$);
$f$: the classifier (e.g., a neural network);
$\mathbb{B}(x, \epsilon)$: $\{x'\in \mathcal{X}:\|x'-x\| \le \epsilon\}$;
$\mathbb{B}(DB(f),\epsilon)$: $\{x \in \mathcal{X}: \exists x'\in \mathbb{B}(x,\epsilon)\ \mathrm{s.t.}\ f(x)f(x')\le 0\}$;
$\psi^*(v)$: $\sup_u\{u^T v-\psi(u)\}$, the conjugate function;
$\phi$: the surrogate loss.

    Error

$$\mathcal{R}_{rob}(f):= \mathbb{E}_{(X,Y)\sim \mathcal{D}}\,\mathbf{1}\{\exists X' \in \mathbb{B}(X, \epsilon)\ \mathrm{s.t.}\ f(X')Y \le 0\}, \tag{e.1}$$
where $\mathbf{1}(\cdot)$ denotes the indicator function. Clearly $\mathcal{R}_{rob}(f)$ is the measure of points for which the classifier $f$ admits adversarial examples.

$$\mathcal{R}_{nat}(f) :=\mathbb{E}_{(X,Y)\sim \mathcal{D}}\,\mathbf{1}\{f(X)Y \le 0\}, \tag{e.2}$$
so $\mathcal{R}_{nat}(f)$ is the probability that $f$ misclassifies a natural sample, and $\mathcal{R}_{rob} \ge \mathcal{R}_{nat}$.

$$\mathcal{R}_{bdy}(f) :=\mathbb{E}_{(X,Y)\sim \mathcal{D}}\,\mathbf{1}\{X \in \mathbb{B}(DB(f), \epsilon),\ f(X)Y > 0\}, \tag{e.3}$$
and clearly
$$\mathcal{R}_{rob}-\mathcal{R}_{nat}=\mathcal{R}_{bdy}. \tag{1}$$
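Identity (1) can be checked on a toy 1-D problem with a linear classifier and an $\ell_\infty$ ball; the setup below is my own illustration, not from the paper:

```python
import torch

# Linear classifier f(x) = x - 0.5 on 1-D inputs, labels in {-1, +1}, eps = 0.2.
def f(x):
    return x - 0.5

xs = torch.tensor([0.1, 0.4, 0.6, 0.9])
ys = torch.tensor([-1.0, -1.0, 1.0, 1.0])
eps = 0.2

nat = (f(xs) * ys <= 0).float()                        # natural errors, as in (e.2)
# For a monotone linear f, the worst margin over [x-eps, x+eps] sits at an endpoint.
worst = torch.min(f(xs - eps) * ys, f(xs + eps) * ys)
rob = (worst <= 0).float()                             # robust errors, as in (e.1)
bdy = ((nat == 0) & (rob == 1)).float()                # correct but near the boundary, (e.3)

R_nat, R_rob, R_bdy = nat.mean(), rob.mean(), bdy.mean()
# Here R_nat = 0, R_rob = 0.5, R_bdy = 0.5, so R_rob - R_nat = R_bdy.
```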

Since directly optimizing the $0\text{-}1$ loss is intractable, a surrogate loss $\phi$ is used instead. Define
$$\mathcal{R}_{\phi}(f):= \mathbb{E}_{(X, Y) \sim \mathcal{D}}\, \phi(f(X)Y), \qquad \mathcal{R}^*_{\phi}:= \min_f \mathcal{R}_{\phi}(f).$$
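Common surrogate choices are the hinge loss and the base-2 logistic loss, both of which satisfy $\phi(0)\ge 1$ and dominate the 0-1 loss pointwise; a small numeric sketch (my own illustration, not from the paper):

```python
import math

def zero_one(u):
    """0-1 loss on the margin u = f(x) * y."""
    return 1.0 if u <= 0 else 0.0

def hinge(u):
    """Hinge surrogate: max(0, 1 - u)."""
    return max(0.0, 1.0 - u)

def logistic2(u):
    """Base-2 logistic surrogate: log2(1 + 2^(-u)); note phi(0) = 1."""
    return math.log2(1.0 + 2.0 ** (-u))

# Both surrogates upper-bound the 0-1 loss at every margin value.
for u in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert hinge(u) >= zero_one(u)
    assert logistic2(u) >= zero_one(u)
```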

    Classification-calibrated surrogate loss

This part is important, but the paper devotes little space to it; I have not fully worked through it and will discuss it after reading the cited references.

[Two screenshots from the paper omitted.]

Lemma 2.1

[Screenshot of Lemma 2.1 omitted.]

Theorem 3.1

Under Assumption 1 with $\phi(0)\ge 1$, for any measurable $f:\mathcal{X} \rightarrow \mathbb{R}$, any probability distribution on $\mathcal{X}\times \{\pm 1\}$, and any $\lambda > 0$,
$$\begin{aligned} \mathcal{R}_{rob}(f) - \mathcal{R}_{nat}^* &\le \psi^{-1}(\mathcal{R}_{\phi}(f)-\mathcal{R}_{\phi}^*) + \mathbf{Pr}[X \in \mathbb{B}(DB(f), \epsilon),\ f(X)Y > 0] \\ &\le \psi^{-1}(\mathcal{R}_{\phi}(f)-\mathcal{R}_{\phi}^*) + \mathbb{E} \max_{X' \in \mathbb{B}(X, \epsilon)} \phi(f(X')f(X)/\lambda). \end{aligned}$$
The last inequality holds because whenever $X \in \mathbb{B}(DB(f), \epsilon)$ there exists $X'\in \mathbb{B}(X,\epsilon)$ with $f(X)f(X')\le 0$, so $\max_{X' \in \mathbb{B}(X, \epsilon)} \phi(f(X')f(X)/\lambda) \ge \phi(0) \ge 1$, which dominates the indicator of the event in the middle term.

Theorem 3.2

[Screenshot of Theorem 3.2 omitted.]

Combining Theorems 3.1 and 3.2 shows that this upper bound is tight.

The resulting TRADES algorithm

For the binary classification problem, one simply optimizes the upper bound above:
[Objective from the paper: screenshot omitted.]

Extending to the multi-class problem only requires replacing the binary losses with their multi-class counterparts:
[Multi-class objective: screenshot omitted.]

The algorithm is as follows:
[Algorithm pseudocode: screenshot omitted.]
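In the multi-class case the robust term is typically a KL divergence between the model's predictions on clean and adversarial inputs. A minimal PyTorch sketch of a TRADES-style objective follows; the inner maximization that produces `logits_adv` is omitted, and the name `beta` (playing the role of $1/\lambda$) is my own:

```python
import torch
import torch.nn.functional as F

def trades_style_loss(logits_nat, logits_adv, labels, beta=6.0):
    """Natural cross-entropy plus beta * KL(p_nat || p_adv).

    logits_nat: model outputs on clean inputs
    logits_adv: model outputs on adversarial inputs
    """
    nat_loss = F.cross_entropy(logits_nat, labels)
    # F.kl_div(log_q, p) computes KL(p || q); 'batchmean' averages over the batch.
    rob_loss = F.kl_div(F.log_softmax(logits_adv, dim=1),
                        F.softmax(logits_nat, dim=1),
                        reduction='batchmean')
    return nat_loss + beta * rob_loss

# Sanity check: with identical clean/adversarial logits the KL term vanishes,
# so the loss reduces to the plain cross-entropy.
logits = torch.tensor([[2.0, 0.0], [0.0, 2.0]])
labels = torch.tensor([0, 1])
loss = trades_style_loss(logits, logits.clone(), labels)
```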

Experiments (overview)

5.1: measures the gap in the theoretical upper bound under this algorithm;
5.2: studies the effect of $\lambda$ on MNIST and CIFAR10; the larger $\lambda$ is, the smaller $\mathcal{R}_{nat}$ and the larger $\mathcal{R}_{rob}$, which shows most clearly on CIFAR10;
5.3: compares different algorithms under different adversarial attacks;
5.4: results from the NIPS 2018 Adversarial Vision Challenge.

Code
import torch
import torch.nn as nn


def quireone(func):  # a currying decorator: pass the optimizer configs first, the parameters last
    def wrapper1(*args, **kwargs):
        def wrapper2(arg):
            return func(*args, arg, **kwargs)
        wrapper2.__doc__ = func.__doc__
        wrapper2.__name__ = func.__name__
        return wrapper2
    return wrapper1


class AdvTrain:

    def __init__(self, eta, k, lam,
                 net, lr=0.01, **kwargs):
        """
        :param eta: step size for adversarial attacks
        :param lr: learning rate
        :param k: number of iterations K in the inner optimization
        :param lam: the lambda that weights the robust term
        :param net: network
        :param kwargs: other configs for optim
        """
        kwargs.update({'lr': lr})
        self.net = net
        self.criterion = nn.CrossEntropyLoss()
        self.opti = self.optim(**kwargs)(self.net.parameters())
        self.eta = eta
        self.k = k
        self.lam = lam

    @quireone
    def optim(self, parameters, **kwargs):
        """
        quireone is the decorator defined above
        :param parameters: net.parameters()
        :param kwargs: other configs
        :return: the optimizer
        """
        return torch.optim.SGD(parameters, **kwargs)

    def normal_perturb(self, x, sigma=1.):
        # random start: perturb the input with Gaussian noise
        return x + sigma * torch.randn_like(x)

    @staticmethod
    def calc_jacobian(loss, inp):
        jacobian = torch.autograd.grad(loss, inp, retain_graph=True)[0]
        return jacobian

    @staticmethod
    def sgn(matrix):
        return torch.sign(matrix)

    def pgd(self, inp, y, perturb):
        # projected gradient ascent: recompute the gradient at every step,
        # then project back onto the l-infinity ball of radius perturb
        boundary_low = inp - perturb
        boundary_up = inp + perturb
        inp_new = inp.detach()
        for _ in range(self.k):
            inp_new = inp_new.detach().requires_grad_(True)
            out = self.net(inp_new)
            loss = self.criterion(out, y)
            delta = self.sgn(self.calc_jacobian(loss, inp_new)) * self.eta
            inp_new = torch.min(
                torch.max(inp_new.detach() + delta, boundary_low),
                boundary_up
            )
        return inp_new

    def ipgd(self, inps, ys, perturb):
        N = len(inps)
        adversarial_samples = []
        for i in range(N):
            inp_new = self.pgd(
                inps[[i]], ys[[i]],
                perturb
            )
            adversarial_samples.append(inp_new)

        return torch.cat(adversarial_samples)

    def train(self, trainloader, epoches=50, perturb=1, normal=1):

        for epoch in range(epoches):
            running_loss = 0.
            for i, data in enumerate(trainloader, 1):
                inps, labels = data

                adv_inps = self.ipgd(self.normal_perturb(inps, normal),
                                     labels, perturb)

                out1 = self.net(inps)
                out2 = self.net(adv_inps)

                loss1 = self.criterion(out1, labels)
                loss2 = self.criterion(out2, labels)

                # natural loss plus the robust term, weighted by 1 / lambda
                loss = loss1 + loss2 / self.lam

                self.opti.zero_grad()
                loss.backward()
                self.opti.step()

                running_loss += loss.item()

                if i % 10 == 0:
                    strings = "epoch {0:<3} part {1:<5} loss: {2:<.7f}\n".format(
                        epoch, i, running_loss
                    )
                    print(strings)
                    running_loss = 0.

Paper reading: Joint Segment-Level and Pixel-Wise Losses for Deep Learning Based Retinal Vessel Segmentation
Paper: Joint Segment-Level and Pixel-Wise Losses for Deep Learning Based Retinal Vessel Segmentation
Code: Joint Segment-Level and Pixel-Wise Losses for Deep Learning Based Retinal Vessel Segmentation
Yan Z, Yang X, Cheng K T. Joint Segment-Level and Pixel-Wise Losses for Deep Learning Based Retinal Vessel Segmentation[J]. IEEE Transactions on Biomedical Engineering, 2018, 65(9):1912-1923.

    Abstract

Deep learning approaches to vessel segmentation are usually trained with pixel-wise loss functions, which assign every vessel pixel the same weight. Because thick vessels account for a far larger share of pixels than thin ones, a pixel-wise loss leads the network to learn the features of thin vessels poorly. This paper therefore proposes a new segmentation loss that emphasizes thin-vessel features during training. By jointly considering segment-level and pixel-wise weights, the contributions of thick and thin vessels are balanced, so finer features can be learned without increasing network complexity.
Experiments show that this joint loss achieves state-of-the-art performance in both independent training and cross-training. The authors conclude that the joint loss learns more discriminative features, that the segment-level loss improves both shallow and deep networks, and that the method can be applied to other tasks to further improve performance without major changes to the network architecture.

    Section I Introduction

Current retinal vessel segmentation methods fall into two camps: supervised and unsupervised learning.
Unsupervised methods do not rely on manual labels; the main families are filter-based and model-based approaches.
Supervised methods split into shallow and deep learning. Shallow methods generally depend on hand-crafted features, while deep methods are trained with pixel-wise losses: the predicted probability map is compared with the ground truth (GT) pixel by pixel. Because thick and thin vessels occupy very different proportions of the image, thin-vessel features need extra emphasis. One option is to build deeper networks, but that increases complexity. This paper instead proposes a segment-level loss that measures the thickness consistency of each vessel segment and assigns a weight to each pixel, imposing a heavier penalty on mismatches in thin vessels. Combining the two levels of loss yields more effective vessel segmentation, and the loss is easy to extend to other tasks.
The paper is organized as follows: Section II analyzes the behavior of the pixel-wise loss; Section III details the proposed segment-level loss and the joint-loss deep learning framework; Section IV validates the joint-loss framework with comparative experiments; Section V concludes.

    Section II pixel-loss & segment loss

The pixel-wise loss matches the generated probability map against the ground-truth map pixel by pixel. Each pixel's predicted probability and true label drive back-propagation and gradient computation; every pixel carries the same weight, and each pixel's loss is computed independently.
The problem is that the segmented vessels come out with inconsistent thickness, most visibly on thin vessels. In Fig 1, the middle panel is the expert annotation and the right panel the segmentation result; the thickness of the result disagrees with the GT, which is caused by the pixel-wise loss.
If vessels thinner than 3 pixels are defined as "thin vessels", then nearly 77% of vessel pixels belong to "thick vessels", so a deep learning framework fits thick vessels better. If the proportions were instead about 45%:55%, the matching would be much more balanced. The proposed segment-level loss therefore adjusts this thickness imbalance by assigning thin vessels larger weights; combining pixel-level and segment-level terms in the loss learns better feature representations for fine vessel segmentation.

    Section III Methodology

This section discusses how the two losses are implemented. During training, a two-branch architecture implements the segment-level and pixel-wise losses separately; at test time, the probability maps output by the two branches are fused to produce the final vessel segmentation.

Part A Segment-level Loss

To address inconsistent vessel thickness, the vessel tree is first skeletonized from the manual annotation and repeatedly split by a preset length threshold into small vessel segments. For each segment, pixels within a search radius r are examined and compared against a threshold, and each pixel is finally labeled as vessel or non-vessel.
The result is then compared with the GT to compute a mismatch ratio, defined as:
[Mismatch-ratio formula: screenshot omitted.]
A weight matrix is then built from the mismatch ratio:
[Weight formula: screenshot omitted.]
so when computing the loss, different pixels receive different weights according to the mismatch ratio. (My reading: thin vessels generally have a larger mismatch ratio, so they are assigned larger weights, and the penalty applied to thin vessels during training is correspondingly heavier.)
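The paper's exact formulas are in the omitted screenshots; purely to illustrate the idea, here is a toy form of a per-segment mismatch ratio and weight (both functions and all constants are my own invention, not the paper's):

```python
def mismatch_ratio(pred_pixels, gt_pixels):
    """Toy mismatch ratio for one vessel segment: the fraction of
    ground-truth segment pixels not matched by the prediction
    (illustrative only, not the paper's definition)."""
    matched = len(pred_pixels & gt_pixels)
    return 1.0 - matched / max(len(gt_pixels), 1)

def segment_weight(ratio, base=1.0, gamma=2.0):
    # Higher mismatch (typical for thin vessels) -> larger loss weight.
    return base + gamma * ratio

# A 4-pixel ground-truth segment of which the prediction recovers half.
gt = {(0, 0), (0, 1), (0, 2), (0, 3)}
pred = {(0, 0), (0, 1)}
r = mismatch_ratio(pred, gt)   # 0.5
w = segment_weight(r)          # 2.0
```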
Hyper-parameter Selection
The segment-level loss involves two hyper-parameters: the maximum length used to split vessels (maxLength) and the search radius (r). maxLength bounds the relative thickness variation within a segment, and should be chosen so that the thickness within each split segment is roughly uniform; r accounts for inconsistencies between different manual annotations, and should be chosen so that skeletons derived from different annotations agree as much as possible (good coverage).
Joint-Loss Framework
The joint-loss deep learning framework uses UNet as the base model, but adds two branches that compute the segment-level and pixel-wise losses respectively. The structure is shown below:
[Joint-loss framework based on UNet: figure omitted.]
As noted above, the segment-level loss focuses on thinner vessels while the ordinary pixel-wise loss focuses on thick ones, so combining the two losses extracts better feature representations. At test time, the outputs of the two branches are fused by pixel-wise multiplication to produce the final segmentation map.
Moreover, since computing the segment-level loss does not depend on any particular architecture, the joint loss can be transferred to other frameworks.

    Section IV Evaluation

Part A Experiments
Datasets: validation is performed on DRIVE, STARE, CHASE_DB1, and HRF.
DRIVE: 40 images, split evenly between training and testing.
STARE: 20 images; with no official train/test split, leave-one-out cross-validation is used (one image for testing, the remaining 19 for training).
CHASE_DB1: 28 images (left and right eyes of 14 school-age children) at 999x960 resolution; the first 20 are used for training, the last 8 for testing.
HRF: 45 high-resolution (3504x2336) fundus images: 15 healthy, 15 diabetic retinopathy, and 15 glaucoma. Due to limited computing resources, the images are downsampled by a factor of 4.
Part B Preprocessing
Preprocessing extracts the green channel as a grayscale image, splits it into 128x128 patches with a stride of 64, and removes background pixels outside the FOV.
To avoid overfitting, common data augmentation is applied: flipping, rotation, resizing, and added random noise.
The numbers of training patches are:
DRIVE (7869) / STARE (8677) / CHASE_DB1 (4789) / HRF (4717)
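The patch scheme above (128x128 windows, stride 64, on the green channel) can be sketched as follows; the function name and the DRIVE-sized dummy image are my own illustration:

```python
import numpy as np

def extract_patches(img, size=128, stride=64):
    """Slide a size x size window with the given stride over a 2-D image
    (e.g. the green-channel array), as in the patch scheme described above."""
    patches = []
    h, w = img.shape
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            patches.append(img[top:top + size, left:left + size])
    return np.stack(patches)

img = np.zeros((584, 565), dtype=np.float32)   # a DRIVE-sized dummy image
p = extract_patches(img)                        # 8 x 7 = 56 patches
```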
Part C Implementation Details
The framework is Caffe, with a decaying learning rate. For comparison, the framework without the segment-level loss is trained first; the framework with the segment-level loss is then trained under the same settings.
Part D Evaluation Metrics
Sensitivity, Specificity, Precision, Accuracy, ROC, AUC.
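The scalar metrics above all derive from the pixel-level confusion matrix; a quick sketch with illustrative numbers (not from the paper):

```python
def metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision, and accuracy from a
    pixel-level confusion matrix."""
    se = tp / (tp + fn)                     # sensitivity (recall on vessel pixels)
    sp = tn / (tn + fp)                     # specificity (recall on background)
    pr = tp / (tp + fp)                     # precision
    acc = (tp + tn) / (tp + fp + tn + fn)   # accuracy
    return se, sp, pr, acc

se, sp, pr, acc = metrics(tp=70, fp=10, tn=900, fn=20)
```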
Part E Results
The main comparison is between segmentation with the joint loss and with the pixel-wise loss alone. The joint-loss framework is also tested for robustness under several training regimes (cross-training, mix-training, threshold-free), and finally evaluated on the high-resolution HRF dataset.
For example, the joint-loss framework achieves the following on the DRIVE dataset:
Se: 0.7653  Sp: 0.9818  Acc: 0.9542  AUC: 0.9752
Joint-Loss vs Pixel-wise Loss
Comparing the 3rd and 4th columns of the figures below, the probability maps produced by the joint loss are "cleaner", making vessels easier to separate from background pixels.
[Fig 5 and Fig 6: omitted.]
The detail comparison in Fig 6 shows that vessels segmented with the joint loss have more consistent thickness; with the pixel-wise loss, some regions are hard even for a human observer to classify as thin vessel or background noise.
Cross-training and Mix-training
Cross-training: a model pre-trained on several datasets is tested on another dataset.
Mix-training: a model trained on one dataset is tested on another dataset (more challenging).
Threshold-free vessel segmentation
Threshold-free segmentation is also evaluated: the final segmentation is generated directly from the probability map, without relying on a manually chosen threshold.
High-Resolution Dataset
Testing on the high-resolution HRF dataset is shown below; the joint loss again helps distinguish vessel from background regions.
[Fig 8: omitted.]

    Section V Discussion

Part A
The challenges of retinal vessel segmentation include:
(1) segmenting vessels in lesion regions;
(2) segmenting low-contrast micro-vessels;
(3) vessels exhibiting light reflex (?).
In Fig 9, Row 1 shows the effect of bright lesions on segmentation, which can be effectively removed by thresholding; Row 2 shows unannotated regions being segmented as thin vessels, a misidentification the joint loss effectively suppresses; Row 3 shows accurate segmentation results from the proposed framework.
Part B Thick vs. thin vessel comparison
Protocol: vessels thinner than 3 pixels are treated as thin vessels and assigned a 5-pixel search radius; thick vessels get a 10-pixel radius; pixel-to-pixel matching is then performed within the search range.
Effectively resolving the thickness imbalance improves Specificity and Precision, i.e., fewer non-vessel regions are wrongly segmented as vessel.
Part C Architecture Independence
To verify that the joint loss does not depend on the implementation framework, UNet is replaced by a simplified FCN (shown below) and validated on the DRIVE dataset.
[FCN architecture: figure omitted.]
Although the FCN differs greatly from UNet in structure and depth, the same two-branch joint-loss design yields consistent results: cleaner segmentations, improved thickness consistency, and more accurate segmentation of fine vessels.
Part D Hyperparameters
Effect of hyper-parameter choices on performance:
(1) maxLength: the smaller maxLength is, the less thickness variation is tolerated within a segment and the better the trained network performs; but a smaller maxLength also increases computation. According to the skeleton-splitting experiments, maxLength does not significantly affect the final segmentation accuracy.
(2) search radius r: a larger radius effectively improves performance, because non-vessel pixels are then penalized by both losses (segment-level and pixel-wise); but a larger radius also increases computation, so in practice a trade-off is required.
[Hyper-parameter table: omitted.]

    Section VI

This paper analyzes the limitations of the pixel-wise loss and proposes a joint loss that adds a segment-level term, rebalancing the attention paid to thick and thin vessels during training. To fuse the two losses, a new two-branch deep learning framework is proposed, and a series of comparative experiments demonstrates the effectiveness and robustness of the joint-loss framework, which can be transferred to other architectures and tasks to improve performance.

Summary
The experimental workload of this paper is impressive; the experiments validating the two losses and the architecture-independence study left the deepest impression. The authors have a second paper that I plan to read next; I wonder whether the two works form a continuous line of research.
Next up:
A Three-Stage Deep Learning Model for Accurate Retinal Vessel Segmentation


    Historically, China has exhibited spatial differentiation in issues ranging from population
    distribution to ecological or economic development; forest pest-control work exemplifies this
    tendency. In recent times, global warming, man-made monoculture tree-plantations, increasing
    human population density, and intensified international trade aggravate forest pest outbreaks.
    Although the Chinese government has complied with internationally recommended practices, some
    aspects of pest management remain unaddressed due to existing differential regional imbalance in
    forest pest distribution and control capacities. Evidence shows that the high-income provinces in
    the south have taken advantage of economic and technological superiority, resulting in the adoption
    of more efficient pest-control measures. In contrast, the economically underdeveloped provinces of
    the northwest continue to experience a paucity of financial support that has led to serious threats
    of pest damage that almost mirror the demarcations of the Hu Huanyong Line. In this paper, we
    propose the introduction of a Public–Private–Partnership (PPP) model into forest pest control and
    the combination of the national strategies to enact regional prevention measures to break away from
    current spatially differentiated trends in China.


     

    This article is from

    http://www.investopedia.com/articles/forex/12/calculating-profits-and-losses-of-forex-trades.asp

    Currency trading offers a challenging and profitable opportunity for well-educated investors. However, it is also a risky market, and traders must always remain alert to their trade positions. The success or failure of a trader is measured in terms of the profits and losses (P&L) on his or her trades. It is important for traders to have a clear understanding of their P&L, because it directly affects the margin balance they have in their trading account. If prices move against you, your margin balance reduces, and you will have less money available for trading.

     


    Realized and Unrealized Profit and Loss
    All your foreign exchange trades will be marked to market in real-time. The mark-to-market calculation shows the unrealized P&L in your trades. The term "unrealized," here, means that the trades are still open and can be closed by you any time. The mark-to-market value is the value at which you can close your trade at that moment. If you have a long position, the mark-to-market calculation typically is the price at which you can sell. In case of a short position, it is the price at which you can buy to close the position.

Until a position is closed, the P&L remains unrealized. When you close out a trade position, the profit or loss becomes realized (realized P&L): a profit increases the margin balance, and a loss decreases it.

    The total margin balance in your account will always be equal to the sum of initial margin deposit, realized P&L and unrealized P&L. Since the unrealized P&L is marked to market, it keeps fluctuating, as the prices of your trades change constantly. Due to this, the margin balance also keeps changing constantly.

    Calculating Profit and Loss
    The actual calculation of profit and loss in a position is quite straightforward. To calculate the P&L of a position, what you need is the position size and by how many pips the price has moved. The actual profit or loss will be equal to the position size multiplied by the pip movement.
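The rule above (P&L = position size × price move, with the sign flipped for shorts) can be written as a tiny helper; the function name is my own:

```python
def pnl(position_size, entry, exit_price, long=True):
    """Profit/loss of an FX position: position size times the price move,
    with the sign flipped for short positions."""
    move = exit_price - entry
    return position_size * (move if long else -move)

# The GBP/USD example discussed below: 100,000 units, 1.6240 -> 1.6255 (+15 pips).
profit = pnl(100_000, 1.6240, 1.6255, long=True)   # +150 USD
loss = pnl(100_000, 1.6240, 1.6220, long=True)     # -200 USD
```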

     


Assume that you have a 100,000 GBP/USD position currently trading at 1.6240. If the price moves from GBP/USD 1.6240 to 1.6255, it has moved up by 15 pips. For a 100,000 GBP/USD position, this 15-pip move equates to USD 150 (100,000 x 0.0015).

    To determine if it's a profit or loss, we need to know whether we were long or short for each trade.

    Long position: In case of a long position, if the prices move up, it will be a profit, and if the prices move down it will be a loss. In our earlier example, if the position is long GBP/USD, then it would be a USD 150 profit. Alternatively, if the prices had moved down from GBP/USD 1.6240 to 1.6220, then it will be a USD 200 loss (100,000 x -0.0020).

     


    Short position: In case of a short position, if the prices move up, it will be a loss, and if the prices move down it will be a profit. In the same example, if we had a short GBP/USD position and the prices moved up by 15 pips, it would be a loss of USD 150. If the prices moved down by 20 pips, it would be a USD 200 profit.

The following table summarizes the P&L calculation for a 100,000 GBP/USD position:

                          Long position    Short position
    Prices up 15 pips     Profit $150      Loss $150
    Prices down 20 pips   Loss $200        Profit $200

    Another aspect of the P&L is the currency in which it is denominated. In our example the P&L was denominated in dollars. However, this may not always be the case.


    In our example, the GBP/USD is quoted in terms of the number of USD per GBP. GBP is the base currency and USD is the quote currency. At a rate of GBP/USD 1.6240, it costs USD 1.6240 to buy one GBP. So, if the price fluctuates, it will be a change in the dollar value. For a standard lot, each pip will be worth USD 10, and the profit and loss will be in USD. As a general rule, the P&L will be denominated in the quote currency, so if it's not in USD, you will have to convert it into USD for margin calculations.

    Consider you have a 100,000 short position on USD/CHF. In this case your P&L will be denominated in Swiss francs. The current rate is roughly 0.9129. For a standard lot, each pip will be worth CHF 10. If the price has moved down by 10 pips to 0.9119, it will be a profit of CHF 100. To convert this P&L into USD, you will have to divide the P&L by the USD/CHF rate, i.e., CHF 100 / 0.9119, which will be USD 109.6611.
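The CHF-to-USD conversion above is just a division by the USD/CHF rate; a one-line sketch (the function name is my own):

```python
def pnl_in_usd(pnl_quote, usd_quote_rate):
    """Convert a P&L denominated in the quote currency (here CHF) to USD
    by dividing by the USD/quote rate."""
    return pnl_quote / usd_quote_rate

usd = pnl_in_usd(100.0, 0.9119)   # roughly 109.66 USD, as in the example above
```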

    Once we have the P&L values, these can easily be used to calculate the margin balance available in the trading account. Margin calculations are typically in USD.

You will not have to perform these calculations manually, because all brokerage accounts automatically calculate the P&L for all your trades. However, it is important that you understand these calculations, as you will have to estimate your P&L and margin requirements while structuring a trade, even before you actually enter it. Depending on how much leverage your trading account offers, you can calculate the margin required to hold a position. For example, if you have leverage of 100:1, you will require a margin of $1,000 to open a standard lot position of 100,000 USD/CHF.
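The margin rule in that last example is simply notional divided by leverage; a minimal sketch (the function name is my own):

```python
def required_margin(notional, leverage):
    """Margin needed to open a position at a given leverage ratio."""
    return notional / leverage

m = required_margin(100_000, 100)   # $1,000 for the 100:1 example above
```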

    The Bottom Line
    Having a clear understanding of how much money is at stake in each trade will help you manage your risk effectively.

     
