
    General Linear Models and Mixed Linear Models

    Mathematical Statistics and Machine Learning for Life Sciences

    This is the seventeenth article from my column Mathematical Statistics and Machine Learning for Life Sciences, where I try to explain, in a simple way, some mysterious analytical techniques used in Bioinformatics and Computational Biology. The Linear Mixed Model (LMM), also known as the Linear Mixed Effects Model, is one of the key techniques in traditional Frequentist statistics. Here I will attempt to derive the LMM solution from scratch from the Maximum Likelihood principle by optimizing the mean and variance parameters of the Fixed and Random Effects. However, before diving into the derivations, I will start slowly in this post with an introduction to when and how to technically run LMM. I will cover examples of linear modeling from both the Frequentist and Bayesian frameworks.


    Problem of Non-Independence in Data

    Traditional Mathematical Statistics is based to a large extent on the assumptions of the Maximum Likelihood principle and the Normal distribution. In the case of, for example, multiple linear regression, these assumptions may be violated if there is non-independence in the data. Provided that the data are expressed as a p by n matrix, where p is the number of variables and n is the number of observations, there can be two types of non-independence in the data:


    • non-independent variables / features (multicollinearity)


    • non-independent statistical observations (grouping of samples)


    In both cases, the data matrix that must be inverted to solve the Linear Model is singular, because its determinant is close to zero due to the correlated variables or observations. This problem is particularly pronounced when working with high-dimensional data (p >> n), where variables can become redundant and correlated; this is known as the Curse of Dimensionality.

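    The singularity problem can be illustrated in a few lines; below is a minimal numpy sketch (the post's own analysis is in R, and the toy data here are made up) showing that with two perfectly collinear columns, the matrix X^T X that the Linear Model solution needs to invert becomes singular.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design matrix with an intercept and two perfectly collinear columns (x3 = 2 * x2)
x = rng.normal(size=10)
X = np.column_stack([np.ones(10), x, 2 * x])

# The OLS solution beta = (X^T X)^{-1} X^T y requires inverting X^T X;
# with collinear columns its determinant is (numerically) zero
xtx = X.T @ X
print(np.linalg.det(xtx))          # ~0 up to floating-point noise
print(np.linalg.matrix_rank(xtx))  # 2 instead of 3: the matrix is singular
```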

    The Curse of Dimensionality: the solution of a linear model diverges in high-dimensional space (the p >> n limit)

    To overcome the problem of non-independent variables, one can, for example, select the most informative variables with LASSO, Ridge or Elastic Net regression, while the non-independence among statistical observations can be taken into account via Random Effects modelling within the Linear Mixed Model.

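    Continuing the numpy sketch, Ridge regression fixes the inversion problem by adding a penalty term λ·I to X^T X, which makes the matrix invertible again; the data and the λ value below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Deliberately collinear design: the third column duplicates the second (times 2)
x = rng.normal(size=10)
X = np.column_stack([np.ones(10), x, 2 * x])
y = 1.0 + 3.0 * x + rng.normal(scale=0.1, size=10)

# Ridge solution: beta = (X^T X + lambda * I)^{-1} X^T y is finite and stable
lam = 0.1
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(beta_ridge)  # the signal on x is split between the two collinear columns
```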


    I covered a few variable selection methods including LASSO in my post Select Features for OMICs Integration. In the next section, we will see an example of longitudinal data where grouping of data points should be addressed through the Random Effects modelling.


    LMM and Random Effects modeling are widely used in various types of data analysis in the Life Sciences. One example is the GCTA tool that contributed a lot to the research on the long-standing problem of Missing Heritability. The idea of GCTA is to fit genetic variants with small effects all together as a Random Effect within the LMM framework. Thanks to the GCTA model, the problem of Missing Heritability seems to be solved, at least for Human Height.


    B. Maher, Nature, volume 456, 2008

    Another popular example from computational biology is the Differential Gene Expression analysis with the DESeq / DESeq2 R package, which does not really run LMM but performs a variance stabilization / shrinkage that is one of the essential points of LMM. The advantage of this approach is that lowly expressed genes can borrow some information from the highly expressed genes, which allows for their more stable and robust testing.


    Finally, LMM is one of the most popular analytical techniques in Evolutionary Science and Ecology, where the state-of-the-art MCMCglmm package is used for estimating, for example, trait heritability.


    Example of Non-Independence in Data

    As we concluded previously, LMM should be used when there is some sort of clustering among statistical observations / samples. This can be, for example, due to different geographic locations where the samples were collected, or different experimental protocols that produced the samples. Batch effects in the Biomedical Sciences are an example of such a grouping factor that leads to non-independence between statistical observations. If not properly corrected for, batch effects in RNAseq data can lead to a totally opposite co-expression pattern between two genes (Simpson's paradox).



    Another example can be genetic relatedness between individuals. Finally, it can be repeated measurements performed on the same individuals but at different time points, i.e. technical (not biological) replicates.


    As an example of such clustering, we will consider a sleep deprivation study where the sleeping time of 18 individuals was restricted, and the Reaction of their organism on a series of tests was measured over 10 days. The data include three variables: 1) Reaction, 2) Days, 3) Subject, i.e. the same individual was followed over the 10 days. To check how the overall reaction of the individuals changed as a response to the sleep deprivation, we will fit an Ordinary Least Squares (OLS) Linear Regression with Reaction as the response variable and Days as the predictor / explanatory variable using lm, and display it with ggplot.



    We can observe that Reaction vs. Days has an increasing trend, but with a lot of variation between days and individuals. Looking at the summary of the linear regression fit, we conclude that the slope is significantly different from zero, i.e. there is a statistically significant increasing relation between Reaction and Days. The grey area around the fitting line represents the 95% confidence interval according to the formula:


    CI = ŷ ± 1.96 · SE(ŷ)

    The magic number 1.96 originates from the Gaussian distribution and reflects the Z-score value covering 95% of the data in the distribution. To demonstrate how the confidence intervals are calculated under the hood by ggplot, we will implement an identical Linear Regression fit in plain R using the predict function.

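    For readers outside R, here is a rough Python / numpy sketch of what predict (and ggplot's grey band) computes under the hood: the OLS fit plus the ± 1.96 · SE band. The data are a simulated stand-in for the sleep study, not the real sleepstudy values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-in for the sleep study: 18 subjects, 10 days each
days = np.tile(np.arange(10), 18).astype(float)
reaction = 250.0 + 10.0 * days + rng.normal(scale=30.0, size=days.size)

# OLS fit: beta = (X^T X)^{-1} X^T y
X = np.column_stack([np.ones_like(days), days])
beta = np.linalg.solve(X.T @ X, X.T @ reaction)

# Residual variance and the standard error of the fitted mean at each point
resid = reaction - X @ beta
sigma2 = resid @ resid / (len(reaction) - 2)
xtx_inv = np.linalg.inv(X.T @ X)
se_fit = np.sqrt(np.einsum("ij,jk,ik->i", X, xtx_inv, X) * sigma2)

# The grey area: fit +/- 1.96 * SE (1.96 is the 95% Normal Z-score)
fit = X @ beta
lower, upper = fit - 1.96 * se_fit, fit + 1.96 * se_fit
print(beta)  # the slope estimate should be close to the simulated value of 10
```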


    However, there is a problem with the fit above. Ordinary Least Squares (OLS) assumes that all the observations are independent, which would result in uncorrelated and hence Normally distributed residuals. However, we know that the data points on the plot belong to 18 individuals (10 points for each), i.e. the data points cluster within individuals and are therefore not independent. As an alternative, we can fit a linear model (lm) for each individual separately.



    We can see that most of the individuals have an increasing Reaction profile, while some have a neutral or even decreasing profile. Doesn't it look strange that the overall Reaction increases while individual slopes might be decreasing? Is the fit above really good enough?


    Did we capture all the variation in the data with the naive Ordinary Least Squares (OLS) Linear Regression model?


    The answer is NO, because we have not taken the non-independence between data points into account. As we will see later, we can do much better with a Linear Mixed Model (LMM) that accounts for non-independence between the samples via Random Effects. Although the term "Random Effects" might sound mysterious, we will show below that it is essentially equivalent to introducing one more fitting parameter in the Maximum Likelihood optimization.


    Frequentist Linear Mixed Model

    The naive linear fit that we used above is called Fixed Effects modeling, as it fixes the coefficients of the Linear Regression: the Slope and the Intercept. In contrast, Random Effects modeling allows for individual-level Slopes and Intercepts, i.e. the parameters of the Linear Regression are no longer fixed but have a variation around their mean values.


    Variation of intercepts and slopes between individuals from the sleep study

    This concept is reminiscent of Bayesian statistics, where the parameters of a model are random while the data are fixed, in contrast to the Frequentist approach, where the parameters are fixed but the data are random. Indeed, later we will show that we obtain similar results with both the Frequentist Linear Mixed Model and the Bayesian Hierarchical Model. Another strength of LMM and Random Effects is that the fit is performed on all individuals simultaneously, in the context of each other, that is, all individual fits "know" about each other. Therefore, the slopes, intercepts and confidence intervals of the individual fits are affected by their common statistic, the shared variance; this is called shrinkage toward the mean, and we will cover it in more detail when deriving LMM from scratch in the next post.


    We will fit LMM with random slopes and intercepts for the effect of Days for each individual (Subject) using the lmer function from the lme4 R package. This corresponds to adding a (Days | Subject) term to the linear model Reaction ~ Days that was previously used inside the lm function.

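    The meaning of the (Days | Subject) term can also be understood generatively: every Subject draws its own intercept and slope around the population (Fixed Effects) values. Below is a hedged numpy sketch with made-up parameter values; the real analysis fits this structure with lmer on the sleepstudy data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generative view of Reaction ~ Days + (Days | Subject); all numbers are made up
n_subjects, n_days = 18, 10
beta0, beta1 = 250.0, 10.0           # population intercept and slope (Fixed Effects)
tau0, tau1, sigma = 25.0, 6.0, 20.0  # Random Effects SDs and residual SD

days = np.arange(n_days, dtype=float)
data = []
for _ in range(n_subjects):
    b0 = rng.normal(beta0, tau0)     # subject-specific intercept
    b1 = rng.normal(beta1, tau1)     # subject-specific slope
    data.append(b0 + b1 * days + rng.normal(0.0, sigma, n_days))
data = np.array(data)                # shape (18, 10): points cluster within subjects

# Per-subject OLS slopes scatter around the population slope beta1
slopes = np.polyfit(days, data.T, 1)[0]
print(slopes.mean())  # close to 10, with clear subject-to-subject spread
```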


    We can immediately see two types of statistics reported: Fixed and Random Effects. The slope and intercept values for the Fixed Effects look fairly similar to the ones obtained above with the OLS Linear Regression. On the other hand, the Random Effects statistics is where the adjustment for non-independence between samples occurs. We can see two types of variance reported: the one shared across slopes and intercepts, Name = (Intercept) and Name = Days, which reflects grouping the data points by Subject, and a Residual variance that remains un-modelled, i.e. we cannot further reduce this variance within the given model.

    Further, comparing the Residual errors between the Fixed (lm) and Random (lmer) effects models, we can see that the Residual error decreased for the Random Effects model, meaning that we captured more variation in the response variable with the Random Effects model. The same conclusion can be drawn from comparing the AIC and BIC values for the two models; again, the LMM with Random Effects simply fits the data better. Now let us visualize the difference between Fixed Effects modeling and LMM modeling.


    For this purpose, we need to visualize the confidence intervals of the LMM model. The standard way to build confidence intervals in the Frequentist / Maximum Likelihood framework is via bootstrapping. We will start with the population-level (overall / average) fit, and re-run it a number of times using resampling with replacement, randomly removing 75% of the samples at each iteration. At each iteration we will save the LMM fit statistics. After the bootstrapped statistics have been accumulated, we will make two plots: first, showing the bootstrapped LMM fits against the naive Fixed Effects fit used in the previous section; second, from the accumulated bootstrapped LMM fits we will compute the median, i.e. the 50% percentile, as well as the 5% and 95% percentiles that will determine the confidence intervals of the population-level LMM fit; this will again be plotted versus the naive Fixed Effects fit.

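    The percentile-bootstrap mechanics described above can be sketched compactly; for simplicity the following numpy version bootstraps a plain OLS slope on simulated data (the post itself re-runs the full lmer fit at each iteration).

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated stand-in for the sleep study data
days = np.tile(np.arange(10), 18).astype(float)
reaction = 250.0 + 10.0 * days + rng.normal(scale=30.0, size=days.size)

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(X.T @ X, X.T @ y)[1]

# Resample with replacement, keeping 25% of the points at each iteration,
# refit, and collect the statistic of interest
boot_slopes = []
for _ in range(1000):
    idx = rng.choice(len(days), size=len(days) // 4, replace=True)
    boot_slopes.append(ols_slope(days[idx], reaction[idx]))

# Percentiles of the bootstrap distribution give the median fit and its CI
low, median, high = np.percentile(boot_slopes, [5, 50, 95])
print(median)       # close to the simulated slope of 10
print((low, high))  # the 5-95% bootstrap confidence interval
```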

    Fixed Effects (blue line, grey area) vs. bootstrapped LMM (black and red lines)

    Above, the Fixed Effects fit (blue line + grey 95% confidence interval area) is displayed together with the computed bootstrapped LMM fits (left plot), and the summary statistics (percentiles) of the bootstrapped LMM fits (right plot). We can observe that the population-level LMM fit (lmer, red line, right plot) is very similar to the Fixed Effects fit (lm, blue line on both plots); the difference is hardly noticeable, they overlap well. However, the computed bootstrapped fits (black thick lines, left plot) and confidence intervals for LMM (red dashed lines, right plot) are a bit wider than for the Fixed Effects fit (grey area on both plots). This difference is partially due to the fact that the Fixed Effects fit does not account for individual-level variation, in contrast to LMM, which accounts for both population- and individual-level variation.


    Another interesting observation is the variation of the Slope and the Intercept around their mean values.



    Therefore, one can hypothesize that the bootstrapping procedure for building confidence intervals within the Frequentist framework can be viewed as allowing the slopes and intercepts to follow some initial (Prior) distributions, and then sampling their plausible values from those distributions. This sounds much like Bayesian statistics. Indeed, bootstrapping is very similar to the workhorse of Bayesian statistics, Markov Chain Monte Carlo (MCMC). In other words, Frequentist analysis with bootstrapping is to a large extent equivalent to Bayesian analysis; we will revisit this later in more detail.


    What about individual slopes, intercepts and confidence intervals for each of the 18 individuals from the sleep deprivation study? Here we again plot their Fixed Effects statistics together with the LMM statistics.



    Again, the red solid and dashed lines correspond to the LMM fit, while the blue solid line and the grey area depict the Fixed Effects model. We can see that the individual LMM fits (lmer) and their confidence intervals might be very different from the Fixed Effects (lm) model. In other words, the individual fits are "shrunk" toward their common population-level mean / median; all the fits help each other to have more stable slopes, intercepts and confidence intervals that resemble the population-level ones. In the next post, when deriving LMM from scratch, we will understand that this shrinkage toward the mean is achieved by adding one more fitting parameter (the shared variance) in the Maximum Likelihood optimization procedure.


    Frequentist / Maximum Likelihood vs. Bayesian Fit

    Before moving to the Bayesian Multilevel Models, let us briefly introduce the major differences between the Frequentist and Bayesian approaches. The Frequentist fit used by LMM through lme4 / lmer is based on the Maximum Likelihood principle, where we maximize the likelihood L(y) of observing the data y, which is equivalent to minimizing the residuals of the model (the Ordinary Least Squares approach). In contrast, the Bayesian linear model is based on the Maximum Posterior Probability principle, where we assume the data to be distributed with some likelihood L(y) and add a Prior assumption on the parameters of the Linear Model.



    Here we calculate a probability distribution of the parameters of the model (and not of the data), which automatically gives us uncertainties (Credible Intervals) on the parameters.

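    The Maximum Posterior Probability principle above is Bayes' theorem applied to the model parameters; in the notation of the text (likelihood L), it can be written as:

```latex
% Posterior over the parameters \theta given the observed data y
P(\theta \mid y) \;=\; \frac{L(y \mid \theta)\, P(\theta)}{P(y)}
\;\propto\; L(y \mid \theta)\, P(\theta)
```

    Maximizing the left-hand side yields the Maximum Posterior estimate of θ; with a flat Prior P(θ) it reduces to the Maximum Likelihood estimate used in the Frequentist fit.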

    Bayesian Multilevel Model

    Linear Mixed Models (LMM) with Bayesian Prior distributions applied to the parameters are called Bayesian Multilevel Models or Bayesian Hierarchical Models. Here, for implementing the Bayesian fitting, we will use the brms R package, which has a syntax identical to lme4 / lmer. However, an important difference to remember is that fitting LMM via lme4 / lmer applies the Maximum Likelihood (ML) principle, i.e. it does not use prior assumptions about the parameters (or, one can say, it uses flat Priors), while the Bayesian Multilevel Model in brms sets reasonable priors reflecting the data. Another thing worth mentioning is that brms uses the probabilistic programming language Stan under the hood. We start with the Bayesian population-level fitting using brms and display the results:



    Above, we again plot the Fixed Effects population-level fit with a blue line and a grey area for the confidence intervals; we also add the population-level Bayesian Multilevel Model fit, using a solid red line for the median and red dashed lines for the credible intervals. As in the case of the bootstrapped LMM fit, we can conclude that the population-level Bayesian Multilevel fit perfectly overlaps with the Fixed Effects fit, while the Bayesian credible intervals are somewhat wider than the 95% confidence intervals of the Fixed Effects fit. What about the individual fits?



    Similarly to the individual bootstrapped Frequentist LMM fits, we can see that the individual Bayesian fits with brms (red solid lines) do not always converge to the Fixed Effects Frequentist fits (blue solid lines), but rather "try" to align with the overall population-level fit (previous plot) in order to be as similar to each other as possible. The Bayesian credible intervals again sometimes look very different compared to the Frequentist Fixed Effects confidence intervals. This is the result of using Bayesian Priors and of accounting for non-normality and non-independence in the data via the multi-level modeling.


    Summary

    In this post, we have learnt that the Frequentist Linear Mixed Model (LMM) and the Bayesian Multilevel (Hierarchical) Model are used to account for non-independence, and hence non-normality, of data points. These models usually provide a better fit and explain more variation in the data compared to the Ordinary Least Squares (OLS) linear regression model (Fixed Effects). While the population-level mean fit of the models typically converges to the Fixed Effects model, the individual fits as well as the credible and confidence intervals can be very different, reflecting better accounting for the non-normality in the data.


    In the comments below, let me know which analytical techniques from the Life Sciences seem especially mysterious to you, and I will try to cover them in future posts. Check the code from this post on my Github. Follow me at Medium Nikolay Oskolkov, on Twitter @NikolayOskolkov, and do connect on LinkedIn. In the next post, we are going to derive the Linear Mixed Model and program it from scratch from the Maximum Likelihood; stay tuned.


    Translated from: https://towardsdatascience.com/how-linear-mixed-model-works-350950a82911


    General Linear Models and Mixed Linear Models

    Mathematical Statistics and Machine Learning for Life Sciences

    This is the eighteenth article from the column Mathematical Statistics and Machine Learning for Life Sciences, where I try to explain some mysterious analytical techniques used in Bioinformatics and Computational Biology in a simple way. The Linear Mixed Model (also called the Linear Mixed Effects Model) is widely used in the Life Sciences, and there are many tutorials showing how to run the model in R; however, it is sometimes unclear how exactly the Random Effects parameters are optimized in the likelihood maximization procedure. In my previous post How Linear Mixed Model Works I gave an introduction to the concepts of the model, and in this tutorial we will derive and code the Linear Mixed Model (LMM) from scratch, applying the Maximum Likelihood (ML) approach, i.e. we will use plain R to code LMM and compare the output with the ones from the lmer and lme R functions. The goal of this tutorial is to explain LMM "like for my grandmother", implying that people with no mathematical background should be able to understand what LMM does under the hood.


    Toy Data Set

    Let us consider a toy data set which is very simple but still keeps all necessary elements of the typical setup for Linear Mixed Modelling (LMM). Suppose we have only 4 data points / samples: 2 originating from Individual #1 and the other 2 coming from Individual #2. Further, the 4 points are spread between two conditions: untreated and treated. Let us assume we measure a response (Resp) of each individual to the treatment, and would like to address whether the treatment resulted in a significant response of the individuals in the study. In other words, we are aiming to implement something similar to the paired t-test and assess the significance of treatment. Later we will relate the outputs from LMM and paired t-test and show that they are indeed identical. In the toy data set, 0 in the Treat column implies “untreated”, and 1 means “treated”. First, we will use a naive Ordinary Least Squares (OLS) linear regression that does not take relatedness between the data points into account.

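    Since the R code and output are shown only as images, here is a small Python reconstruction of the toy data set; the exact Resp values (3, 6, 10, 25) are inferred from the group means and test statistics quoted later in the text. The naive OLS fit simply recovers the two group means.

```python
import numpy as np

# Toy data reconstructed from the statistics quoted in the text:
# untreated mean (3 + 10) / 2 = 6.5, treated mean (6 + 25) / 2 = 15.5
ind = np.array([1, 1, 2, 2])              # Individual
treat = np.array([0.0, 1.0, 0.0, 1.0])    # Treat: 0 = untreated, 1 = treated
resp = np.array([3.0, 6.0, 10.0, 25.0])   # Resp

# Naive OLS fit Resp ~ Treat, ignoring that points pair within individuals
X = np.column_stack([np.ones(4), treat])
intercept, slope = np.linalg.solve(X.T @ X, X.T @ resp)
print(intercept, slope)  # intercept 6.5 and slope 9: group means 6.5 and 6.5 + 9
```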


    Technically it works; however, this is not a good fit, and we have a problem here. Ordinary Least Squares (OLS) linear regression assumes that all observations (data points on the plot) are independent, which should result in uncorrelated and hence normally distributed residuals. However, we know that the data points on the plot belong to 2 individuals, i.e. 2 points for each individual. In principle, we can fit a linear model for each individual separately. However, this is not a good fit either: we have only two points for each individual, too few to make a reasonable fit for each individual. In addition, as we saw previously, individual fits do not say much about the overall / population profile, as some of them may show opposite behavior compared to the rest of the individual fits.



    In contrast, if we want to fit all four data points together, we will need to somehow account for the fact that they are not independent, i.e. two of them belong to Individual #1 and two belong to Individual #2. This can be done within the Linear Mixed Model (LMM) or with a paired test, e.g. the paired t-test (parametric) or the Wilcoxon signed-rank test (non-parametric).


    Linear Mixed Model with Lmer and Lme

    We use LMM when there is a non-independence between observations. In our case, the observations cluster within individuals. Let us apply LMM with Fixed Effects for slopes and intercepts and Random Effects for intercepts, this will result in adding a (1 | Ind) term to the Resp ~ Treat formula:



    Here, REML=FALSE simply means that we are using the traditional Maximum Likelihood (ML) optimization and not Restricted Maximum Likelihood (we will talk about REML another time). In the Random Effects section of the lmer output, we see estimates for 2 parameters of the minimization: the residual variance, corresponding to a standard deviation (Std.Dev.) of 4.243, and the random effects variance (shared between individuals) associated with the Intercept, with a standard deviation of 5.766. Similarly, in the Fixed Effects section of the lmer output we can see two estimates: 1) the Intercept, equal to 6.5, and 2) the Slope / Treat, equal to 9. Therefore, we have 4 parameters of optimization that correspond to the 4 data points.

    The values of the Fixed Effects make sense if we look at the very first figure in the Toy Data Set section and realize that the mean of the two values for untreated samples is (3 + 10) / 2 = 6.5, which we will denote as β1, and the mean of the treated samples is (6 + 25) / 2 = 15.5, which we will denote as β2. The latter is equivalent to 6.5 + 9, i.e. the estimate for the Fixed Effect of the Intercept (= 6.5) plus the estimate for the Fixed Effect of the Slope (= 9). Here, we pay attention to the exact values of the Random and Fixed effects because we are going to reproduce them later when deriving and coding LMM.


    By default, the lme4 R package and the lmer R function do not provide measures of statistical significance such as p-values; however, if you would still like to have a p-value for your LMM fit, it is possible to use the lme function from the nlme R package:



    Again, here we have Random Effects for the Intercept (StdDev = 5.766264) and the Residual (StdDev = 4.242649), and Fixed Effects for the Intercept (Value = 6.5) and the Slope / Treat (Value = 9). Quite interestingly, the standard errors of the Fixed Effects, and hence their t-values (t-value = 1.5 for Treat), do not agree between lmer and lme. However, if we demand REML=TRUE in the lmer function, the Fixed Effects statistics, including the t-values, are identical between lme and lmer, while the Random Effects statistics are different.



    This is the difference between the Maximum Likelihood (ML) and Restricted Maximum Likelihood (REML) approaches that we will cover next time.


    Relation Between LMM and Paired T-Test

    Previously, we said that LMM is a more complex form of the simple paired t-test. Let us demonstrate that for our toy data set they do give identical outputs. Along the way, we will also understand the technical difference between paired and un-paired t-tests. Let us first run the paired t-test between the treated and untreated groups of samples, taking into account the non-independence between them:

    之前,我们说过LMM是简单配对t检验的更复杂形式。 让我们证明,对于我们的玩具数据集,它们确实提供了相同的输出。 在途中,我们还将了解配对和非配对t检验之间的技术差异。 考虑到样本之间的非独立性,让我们首先在已处理样本组和未处理样本组之间进行配对t检验:

[图: 配对 t 检验输出]

    We can see that the t-value=1.5 and p-value = 0.3743 reported by the paired t-test are identical to the ones obtained by LMM using the nlme function or lmer with REML = TRUE. The reported by the paired t-test statistic “mean of the differences = 9” also agrees with the Fixed Effect estimates from lmer and nlme, remember we had the Treat Estimate = 9 that was simply the difference between the means of the treated and untreated samples.

    我们可以看到,配对t检验报告的t值= 1.5和p值= 0.3743 与LMM使用nlme函数或lmer的REML = TRUE获得的值相同。 配对t检验统计数据所报告的“差异均值= 9”也与lmer和nlme的“固定效应”估算值一致,请记住,我们的“治疗估算值” = 9,这仅仅是治疗与未治疗方法之间的差异样品。
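To make this concrete, the same paired t-test can be reproduced in a few lines of Python (a sketch with scipy rather than the article's R; the four values 3, 10, 6, 25 are read off the toy data set):

```python
from scipy import stats

# Toy data from the article: two individuals, each measured untreated and treated
untreated = [3, 10]   # individual 1, individual 2
treated   = [6, 25]

# The paired t-test accounts for the non-independence within each individual
res = stats.ttest_rel(treated, untreated)
print(res.statistic)          # t-value = 1.5, as reported by lme / lmer (REML = TRUE)
print(round(res.pvalue, 4))   # p-value = 0.3743

# "mean of the differences" reported by the paired t-test
diffs = [t - u for t, u in zip(treated, untreated)]
print(sum(diffs) / len(diffs))  # 9, the Treat fixed-effect estimate
```

The t-value, p-value and mean difference all match the lme / nlme output quoted above.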

Now, what is the paired t-test actually doing? The idea of the paired t-test is to turn the setup into a one-sample t-test, where the values in one group are tested for significant deviation from zero, zero here playing the role of the mean of the second group. In other words, we can view a paired t-test as shifting the intercepts of the individual fits (see the very first figure), i.e. the mean values of the untreated group, down to zero. In practice, this is equivalent to subtracting the untreated Resp values from the treated ones, i.e. transforming the Resp variable into Resp_std (standardized Response) as shown below, and then performing an un-paired t-test on the Resp_std variable instead of Resp:

    现在,配对t检验到底在做什么? 好吧,配对t检验的想法是使设置看起来像一个单样本t检验 ,其中一组中的值被测试是否与零显着偏离 ,这是第二组的平均值。 换句话说,我们可以查看成对的t检验,就好像我们将各个拟合的截距(参见第一个数字)或未处理组的平均值减小到零一样 。 以一种简单的方式,这等效于从已处理的值中减去未处理的Resp值,即如下所示将Resp变量转换为Resp_std (标准响应),然后对Resp_std变量执行未配对的t检验,而不是响应:

[图: 标准化后的数据 (Resp_std) 与非配对 t 检验输出]

    We observe that the values of Response became 0 for Treat = 0, i.e. untreated group, while the Response values of the treated group (Treat=1) are reduced by the values of the untreated group. Then we simply used the new Resp_std variable and ran an un-paired t-test, the result is equivalent to running paired t-test on the original Resp variable. Therefore, we can summarize that LMM reproduces the result of the paired t-test but allows for much more flexibility, for example, not only two (like for t-test) but multiple groups comparison etc.

    我们观察到,对于Treat = 0(即未治疗组),Response的值变为0,而治疗组(Treat = 1)的Response值减少了未经治疗组的值。 然后,我们只需使用新的Resp_std变量并运行未配对的t检验,结果就相当于在原始Resp变量上运行配对的t检验。 因此,我们可以总结出,LMM再现了配对t检验的结果,但是允许更大的灵活性,例如,不仅两个(像t检验一样),而且可以进行多组比较等。
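The equivalence described above is easy to check numerically; below is a small Python/scipy sketch (not the article's R code) showing that the pooled un-paired t-test on Resp_std gives the same t-value as the paired t-test on the original Resp:

```python
from scipy import stats

untreated = [3, 10]
treated   = [6, 25]

# Standardize: shift each individual's untreated value down to zero
resp_std_untreated = [u - u for u in untreated]                   # [0, 0]
resp_std_treated   = [t - u for t, u in zip(treated, untreated)]  # [3, 15]

# An ordinary (pooled) un-paired t-test on Resp_std ...
t_unpaired = stats.ttest_ind(resp_std_treated, resp_std_untreated,
                             equal_var=True).statistic

# ... gives the same t-value as the paired t-test on the original Resp
t_paired = stats.ttest_rel(treated, untreated).statistic
print(t_unpaired, t_paired)   # both equal 1.5
```

Note that the t-statistics coincide; the degrees of freedom (and hence the exact p-values) of the two tests differ, which is a known technical subtlety of this trick.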

    线性混合模型背后的数学 (Math Behind Linear Mixed Model)

Let us now try to derive a few LMM equations using our toy data set. We will again have a look at the 4 data points and introduce some mathematical notation accounting for the treatment effects, β, which are nothing else than the Fixed Effects, and the block-wise structure u due to the points clustering within the two individuals, which is the Random Effects contribution. We are going to express the Response (Resp) coordinates y in terms of the β and u parameters.

现在让我们尝试使用玩具数据集推导一些 LMM 方程。我们将再次查看这 4 个数据点,并引入一些数学记号:β 表示处理效应,它正是固定效应;u 表示由于数据点在两个个体内聚集而产生的分块结构,它实际上就是随机效应的贡献。我们将用 β 和 u 参数来表示响应(Resp)坐标 y。

[图: 玩具数据集的 4 个数据点与数学记号标注]

    Here, β1 is the Response of the individuals in the Untreated state, while β2 is the Response on the Treatment. One can also say, that β1 is the mean of the untreated samples while β2 is the mean of the treated samples. The variables u1 and u2 are block variables accounting for effects specific to Individual #1 and Individual #2, respectively. Finally, ϵij ∼ N(0, σ²) is the Residual error, i.e. the error we can’t model and can only try to minimize it as the goal of the Maximum Likelihood optimization problem. Therefore, we can write down the Response variable y as a combination of parameters β, u, i.e. Fixed and Random Effects, and ϵ as Eq. (1). In a general form, this system of algebraic equations can be rewritten as Eq. (2), where the index i = 1, 2 corresponds to treatment effects and j = 1, 2 describes individual effects. We can also express this system of equations in the matrix form Eq. (3). Therefore we arrive to the following famous matrix form of LMM which is shown in all textbooks but not always properly explained, Eq. (4).

在此,β1 是个体在未处理状态下的响应,而 β2 是处理后的响应。也可以说,β1 是未处理样本的平均值,而 β2 是处理后样本的平均值。变量 u1 和 u2 是块变量,分别刻画个体 #1 和个体 #2 特有的效应。最后,ϵij ∼ N(0, σ²) 是残差误差,即我们无法建模、只能作为最大似然优化问题的目标将其最小化的误差。因此,我们可以把响应变量 y 写成参数 β、u(即固定效应和随机效应)与 ϵ 的组合,见等式 (1)。一般地,这一代数方程组可以写成等式 (2),其中下标 i = 1, 2 对应处理效应,j = 1, 2 描述个体效应。我们还可以把该方程组写成矩阵形式,等式 (3)。于是我们得到了所有教科书中都会出现、但并不总是被正确解释的 LMM 著名矩阵形式,等式 (4)。

y11 = β1 + u1 + ε11
y12 = β1 + u2 + ε12
y21 = β2 + u1 + ε21
y22 = β2 + u2 + ε22                                          (1)

y_ij = β_i + u_j + ε_ij,   i = 1, 2 (处理);  j = 1, 2 (个体)   (2)

[y11]   [1 0]          [1 0]          [ε11]
[y12] = [1 0] [β1]  +  [0 1] [u1]  +  [ε12]
[y21]   [0 1] [β2]     [1 0] [u2]     [ε21]
[y22]   [0 1]          [0 1]          [ε22]                  (3)

Y = Xβ + Ku + ε                                              (4)

Here, X is called the design matrix and K is called the block matrix; the latter codes the relationship between the data points, i.e. whether they come from related individuals or even from the same individual, as in our case. It is important to note that the treatment is modeled as a fixed effect because the two levels, treated and untreated, exhaust all possible outcomes of the treatment. In contrast, the block-wise structure of the data is modeled as a Random Effect, since the individuals were sampled from the population and might not correctly represent the entire population of individuals. In other words, there is an error associated with the random effects, i.e. uj ∼ N(0, σs²), while fixed effects are assumed to be error free. For example, sex is usually modeled as a Fixed Effect because it is usually assumed to have only two levels (males, females), while batch effects in Life Sciences should be modeled as Random Effects, because additional experimental protocols or labs would potentially produce many more levels of systematic differences between samples that confound the data analysis. As a rule of thumb, Fixed Effects should not have many levels, while Random Effects are typically multi-level categorical variables where the observed levels represent just a sample of all possibilities, not all of them.

在这里,X 称为设计矩阵,K 称为块矩阵,后者编码数据点之间的关系,即数据点是否来自相关的个体,甚至像本例中一样来自同一个体。需要注意,处理(treatment)被建模为固定效应,因为「处理/未处理」这两个水平穷尽了处理的所有可能结果。相比之下,数据的分块结构被建模为随机效应,因为个体是从总体中抽样得到的,可能无法正确代表整个总体。换句话说,随机效应带有误差,即 uj ∼ N(0, σs²),而固定效应被假定为无误差。例如,性别通常被建模为固定效应,因为通常假定它只有两个水平(男性、女性);而生命科学中的批次效应应被建模为随机效应,因为更多的实验方案或实验室会潜在地产生更多水平的、混淆数据分析的样本间系统差异。根据经验,固定效应不应有太多水平,而随机效应通常是多水平的分类变量,其水平只是所有可能性中的一个样本,而非全部。
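The design matrix X and block matrix K for the toy data set can be written out explicitly and checked; the numpy sketch below is illustrative (the random-effects vector u = (-6.5, 6.5) is chosen purely for illustration and is not a model estimate):

```python
import numpy as np

# Data ordering: y11 (untreated, ind 1), y12 (untreated, ind 2),
#                y21 (treated,  ind 1), y22 (treated,  ind 2)
X = np.array([[1, 0],    # design matrix: column 1 = untreated, column 2 = treated
              [1, 0],
              [0, 1],
              [0, 1]])
K = np.array([[1, 0],    # block matrix: column 1 = individual 1, column 2 = individual 2
              [0, 1],
              [1, 0],
              [0, 1]])

beta = np.array([6.5, 15.5])   # fixed effects: the two group means from the article
u    = np.array([-6.5, 6.5])   # illustrative individual (random) effects

mean_y = X @ beta              # E[Y]: the fixed-effect part
print(mean_y)                  # [ 6.5  6.5 15.5 15.5]
print(X @ beta + K @ u)        # adds the individual-specific shifts
```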

    Let us proceed with deriving the mean and the variance of the data points Y. Since both the Random Effect error and the Residual error come from Normal distribution with zero mean, while the non-zero component in E[Y] originates from the Fixed Effect, we can express the expected value of Y as Eq. (5). Next, the variance of the Fixed Effect term is zero, as Fixed Effects are assumed to be error free, therefore, for the variance of Y we obtain Eq. (6).

    让我们继续推导数据点 Y均值方差 由于随机效应误差和残余误差均来自均值为零的正态分布,而E [ Y ]中的非零分量源自固定效应,因此我们可以将Y的期望值表示为Eq。 (5)。 接下来,固定效果项的方差为零,因为固定效果被假定为无误差,因此,对于Y的方差,我们得到等式。 (6)。

E[Y] = E[Xβ + Ku + ε] = Xβ                           (5)

var(Y) = var(Ku) + var(ε) = σs²·K·K^T + σ²·I         (6)

          [1 0 1 0]
K·K^T  =  [0 1 0 1]                                  (7)
          [1 0 1 0]
          [0 1 0 1]

       [σs²+σ²    0      σs²     0   ]
Σy  =  [  0     σs²+σ²    0     σs²  ]               (8)
       [ σs²      0     σs²+σ²   0   ]
       [  0      σs²      0    σs²+σ²]

Eq. (6) was obtained by taking into account that var(Ku) = K·var(u)·K^T, var(ϵ) = σ²·I and var(u) = σs²·I, where I is a 4 x 4 identity matrix. Here, σ² is the Residual variance (unmodeled/unreduced error), and σs² is the variance of the random Intercept effects (shared across data points). The matrix in front of σs² is called the kinship matrix and is given by Eq. (7). The kinship matrix codes all relatedness between data points. For example, some data points may come from genetically related people or geographic locations in close proximity, and some data points may come from technical replicates. Those relationships are coded in the kinship matrix. Thus, the variance-covariance matrix of the data points from Eq. (6) takes the ultimate form of Eq. (8). Once we have obtained the variance-covariance matrix, we can continue with the optimization procedure of the Maximum Likelihood function, which requires the variance-covariance matrix.

等式 (6) 的推导利用了 var(Ku) = K·var(u)·K^T、var(ε) = σ²·I 以及 var(u) = σs²·I,其中 I 是 4×4 的单位矩阵。这里,σ² 是残差方差(未建模/未消减的误差),σs² 是随机截距效应(在数据点之间共享)的方差。σs² 前面的矩阵称为亲缘矩阵(kinship matrix),由等式 (7) 给出。亲缘矩阵编码了数据点之间的所有相关性。例如,一些数据点可能来自有遗传关系的人群或地理位置十分接近的地点,一些数据点可能来自技术重复。这些关系都编码在亲缘矩阵中。因此,等式 (6) 中数据点的方差-协方差矩阵最终取等式 (8) 的形式。一旦获得方差-协方差矩阵,我们就可以继续进行需要它的最大似然函数的优化过程。
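The kinship matrix of Eq. (7) and the variance-covariance matrix of Eq. (8) are straightforward to build in numpy; the variance values plugged in below are the ML estimates quoted later in the article:

```python
import numpy as np

K = np.array([[1, 0],
              [0, 1],
              [1, 0],
              [0, 1]])

sigma2  = 18.0    # residual variance sigma^2 (ML value derived later: sigma = 4.2426)
sigmas2 = 33.25   # random-intercept variance sigma_s^2 (ML value: sigma_s = 5.7663)

kinship = K @ K.T                                   # Eq. (7): 1 where two points share an individual
Sigma_y = sigmas2 * kinship + sigma2 * np.eye(4)    # Eq. (8)

print(kinship)
print(Sigma_y)
```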

    LMM来自最大似然(ML)原理 (LMM from Maximum Likelihood (ML) Principle)

    Why did we spend so much time deriving the variance-covariance matrix and what does it have to do with the linear regression? It turns out that the whole concept of fitting a linear model, as well as many other if not all concepts of traditional Frequentist statistics, comes from the Maximum Likelihood (ML) principle. For this purpose, we need to maximize the Multivariate Gaussian distribution function with respect to parameters β1, β2, σs² and σ², Eq. (9).

    为什么我们要花费大量时间来得出方差-协方差矩阵,它与线性回归有什么关系? 事实证明, 拟合线性模型的整个概念以及传统频率统计的许多其他(如果不是全部)概念都来自最大似然(ML) 原理 。 为此,我们需要针对参数β1β2σs²σ² ,等式最大化多元高斯 分布函数 。 (9)。

L(Y) = (2π)^(-n/2) · |Σy|^(-1/2) · exp( -(1/2)·(Y - Xβ)^T·Σy^(-1)·(Y - Xβ) )         (9)

-log L(Y) = (n/2)·log(2π) + (1/2)·log|Σy| + (1/2)·(Y - Xβ)^T·Σy^(-1)·(Y - Xβ)        (10)

Here |Σy| denotes the determinant of the variance-covariance matrix. We see that the inverse and the determinant of the variance-covariance matrix are explicitly included in the Likelihood function; this is why we had to compute its expression via the random effects variance σs² and the residual variance σ². Maximization of the Likelihood function is equivalent to minimization of the negative log-likelihood function, Eq. (10).

在这里,|Σy| 表示方差-协方差矩阵的行列式。我们看到方差-协方差矩阵的逆矩阵和行列式被显式地包含在似然函数中,这就是为什么我们必须通过随机效应方差 σs² 和残差方差 σ² 计算出它的表达式。似然函数的最大化等效于负对数似然函数的最小化,即等式 (10)。

We will need to perform a tedious symbolic derivation of the determinant of the variance-covariance matrix, the inverse variance-covariance matrix, and the product of the inverse variance-covariance matrix with the Y-vector. In my experience, this is hard to do in R / Python; however, we can use Maple (or similarly Mathematica or Matlab) for the symbolic calculations, and derive the expressions for the determinant and inverse of the variance-covariance matrix:

    我们将需要对方差-协方差矩阵, 逆方差-协方差矩阵具有Y - 项的 方差-协方差矩阵乘积执行繁琐的符号推导 。 以我的经验,这在R / Python中很难做到,但是我们可以使用Maple (或类似的 Mathematica Matlab )进行符号计算,并导出方差-协方差矩阵的行列式和逆式的表达式:

    Complex symbolic derivations become easy in Maple / Mathematica / Matlab environment
    在Maple / Mathematica / Matlab环境中,复杂的符号推导变得容易

    Using Maple we can obtain the determinant of the variance-covariance matrix as Eq. (11). Next, the last term in Eq. (10) for log-likelihood takes the form of Eq. (12).

    使用Maple,我们可以获得方差-协方差矩阵的行列式。 (11)。 接下来,等式中的最后一项。 (10)对数似然采用等式的形式。 (12)。

|Σy| = σ⁴·(σ² + 2σs²)²                               (11)

(Y - Xβ)^T·Σy^(-1)·(Y - Xβ)
  = Σ_j [ (σs² + σ²)·(r1j² + r2j²) - 2σs²·r1j·r2j ] / [ σ²·(σ² + 2σs²) ],   r_ij = y_ij - β_i        (12)
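Eq. (11) can be checked numerically against numpy's general-purpose determinant for arbitrary positive variances (a quick sanity check, not part of the original derivation):

```python
import numpy as np

K = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)

rng = np.random.default_rng(0)
for _ in range(5):
    sigma2, sigmas2 = rng.uniform(0.5, 10, size=2)      # arbitrary positive variances
    Sigma_y = sigmas2 * K @ K.T + sigma2 * np.eye(4)
    det_numeric  = np.linalg.det(Sigma_y)
    det_symbolic = sigma2**2 * (sigma2 + 2 * sigmas2)**2   # Eq. (11): sigma^4 (sigma^2 + 2 sigma_s^2)^2
    assert np.isclose(det_numeric, det_symbolic)
print("Eq. (11) confirmed numerically")
```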

    Now we are ready to perform the numeric minimization of the log-likelihood function with respect to β1, β2, σs² and σ² using the optim R function:

    现在,我们准备使用优化 R函数对β1β2σs²σ²执行对数似然函数的数值最小化:

[图: 使用 optim 进行数值最小化的 R 代码与收敛输出]

    We can see that the minimization algorithm has successfully converged since we got the “convergence = 0” message. In the output, σ=4.242640687 is the residual standard deviation, which exactly reproduces the result from lme and lmer (with REML = FALSE). By analogy, σs=5.766281297 is the shared standard deviation that again exactly reproduces the corresponding Random Effects Intercept outputs from lme and lmer (with REML = FALSE) functions. As expected, the Fixed Effect β1 = 6.5 is the mean of the untreated samples in agreement with the Intercept Fixed Effect Estimate from lmer and lme. Next, β2 = 15.5 is the mean of treated samples, which is the Intercept Fixed Effect Estimate (= 6.5) plus the Slope / Treat Fixed Effect Estimate (= 9) from lmer and lme R functions.

    我们可以看到,自从收到“ convergence = 0”消息以来,最小化算法已成功收敛 。 在输出中, σ= 4.242640687残余标准偏差 ,它精确地再现了 lme和lmer 的结果 (REML = FALSE)。 以此类推, σs= 5.766281297共享的标准偏差 ,该偏差再次精确地再现了来自lme和lmer(具有REML = FALSE)功能的相应的“随机效果拦截”输出。 正如预期的那样,固定效应β1= 6.5的平均值与固定效应估算和11聚物伦敦金属交易所拦截协议未经处理的样品。 接下来, β2= 15.5是已处理样品的平均值,它是截距固定效果估计值(= 6.5)加上来自lmer和lme R函数的斜率/处理固定效果估计值(= 9)。
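The optim minimization can also be reproduced outside R; the following is a hedged Python sketch using scipy.optimize.minimize and scipy.stats.multivariate_normal (the starting values and the log-variance parameterization are my own choices, not the author's code):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

Y = np.array([3.0, 10.0, 6.0, 25.0])                  # untreated: 3, 10; treated: 6, 25
X = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)  # design matrix (fixed effects)
K = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], float)  # block matrix (individuals)

def negloglik(par):
    """Negative log-likelihood of Eq. (10); variances kept positive via a log scale."""
    beta = par[:2]
    sigma2, sigmas2 = np.exp(par[2]), np.exp(par[3])
    Sigma_y = sigmas2 * K @ K.T + sigma2 * np.eye(4)   # Eq. (8)
    return -multivariate_normal.logpdf(Y, mean=X @ beta, cov=Sigma_y)

res = minimize(negloglik, x0=[5.0, 15.0, np.log(10.0), np.log(10.0)],
               method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 20000})

beta1, beta2 = res.x[:2]
sigma   = np.sqrt(np.exp(res.x[2]))
sigma_s = np.sqrt(np.exp(res.x[3]))
print(beta1, beta2)        # ~6.5 and ~15.5, the untreated and treated group means
print(sigma, sigma_s)      # ~4.2426 and ~5.7663, matching lmer / lme with REML = FALSE
```

The optimizer recovers the same four parameters as the article's R code: the Fixed Effects β1 = 6.5 and β2 = 15.5, the residual standard deviation σ ≈ 4.2426, and the shared standard deviation σs ≈ 5.7663.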

    Fantastic job! We have successfully reproduced the Fixed Effects and Random Effects outputs from lmer / lme functions by deriving and coding the Linear Mixed Model (LMM) from scratch!

    很棒的工作! 通过从头开始推导和编码线性混合模型(LMM),我们已经成功地从lmer / lme函数复制了固定效果和随机效果输出!

    摘要 (Summary)

    In this article, we have learnt how to derive and code a Linear Mixed Model (LMM) with Fixed and Random Effects on a toy data set. We covered the relation between LMM and the paired t-test, and reproduced the Fixed and Random Effects parameters from lmer and lme R functions.

    在本文中,我们学习了如何 玩具数据集 导出具有 固定和随机效应线性混合模型(LMM)并进行编码。 我们介绍了LMM和配对t检验之间的关系,并从lmerlme R函数复制了“固定效应”和“随机效应”参数。

In the comments below, let me know which analytical techniques from Life Sciences seem especially mysterious to you, and I will try to cover them in future posts. Check the codes from this post on my Github. Follow me at Medium (Nikolay Oskolkov), on Twitter @NikolayOskolkov, and connect on LinkedIn. In the next post, we are going to cover the difference between the Maximum Likelihood (ML) and Restricted Maximum Likelihood (REML) approaches, stay tuned.

    在下面的评论中,让我知道生命科学的哪些分析技术对您来说似乎特别神秘 ,我将在以后的文章中尝试介绍它们。 在我的Github上检查帖子中的代码。 跟随我在中型Nikolay Oskolkov,在Twitter @NikolayOskolkov上进行连接,并在Linkedin中进行连接。 在下一篇文章中,我们将介绍 密切关注最大可能性(ML)和限制最大可能性(REML)方法之间的差异。

    翻译自: https://towardsdatascience.com/linear-mixed-model-from-scratch-f29b2e45f0a4

  • 我对Statsmodels Mixedlm的输出感到有点困惑,我希望有人可以解释一下.我有一个大型的单户住宅数据集...我使用statsmodels混合线性模型来回归价格升值的高程,保持其他一些因素不变,以城市作为我的团体类别.md = smf.m...

    我对Statsmodels Mixedlm的输出感到有点困惑,我希望有人可以解释一下.

    我有一个大型的单户住宅数据集,包括每个房产的前两个销售价格/销售日期.我对整个数据集进行了地理编码,并获取了每个属性的高程.我试图了解不同城市之间提升与房地产价格升值之间关系的变化方式.

我使用 statsmodels 的混合线性模型,以城市作为分组变量,在控制其他一些因素的情况下,用海拔对价格升值进行回归:md = smf.mixedlm('price_relative_ind ~ Elevation + YearBuilt + Sale_Amount_1 + LivingSqFt', data=Miami_SF, groups=Miami_SF['City'])

    mdf = md.fit()

    mdf.random_effects

输入 mdf.random_effects 将返回一组系数。我能否将此结果解释为每个城市的斜率(即,与销售价格升值相关的各自的回归系数)?或者这些结果是每个城市的截距?

    解决方法:

我目前正试图理解 MixedLM 中的随机效应。看看 the docs,似乎只使用 groups 参数、不提供 exog_re 或 re_formula 时,只会为每个组添加一个随机截距。来自文档的一个例子:

# A basic mixed model with fixed effects for the columns of exog and a random intercept for each distinct value of group:

    model = sm.MixedLM(endog, exog, groups)

    result = model.fit()

    因此,在这种情况下,您会期望random_effects方法返回城市的截距,而不是系数/斜率.

要为您的其他特征(变量)添加随机斜率,您可以仿照 statsmodels 的 Jupyter 教程中的这个示例,同时使用随机斜率和随机截距:model = sm.MixedLM.from_formula(

    "Y ~ X", data, re_formula="X", groups=data["C"])

或只有随机斜率:model = sm.MixedLM.from_formula(

    "Y ~ X", data, re_formula="0 X", groups=data["C"])

查看 random_effects 的文档,它表示该方法返回给定数据下每个组随机效应的条件均值。然而,由于这里的随机效应仅来自截距,这应该等于截距本身。

MixedLMResults.random_effects()[source]

    The conditional means of random effects given the data.

    Returns:

    random_effects : dict

    A dictionary mapping the distinct group values to the means of the random effects for the group.

    一些有用的资源,包括:

    > Docs为MixedML的公式版本

    > Docs为MixedML的结果

    > This Jupyter笔记本以及使用MixedML(Python)的示例

    > Stanford tutorial混合型号(R)

> Tutorial关于固定和随机效应(R)

来源:https://www.icode9.com/content-1-495501.html

  • 分块混合线性模型的参数估计,李娜,,对于分块混合线性模型,研究了固定效应的参数估计。证明了在一定条件下,原模型中参数 的LS估计与投影模型下参数 的BLUE相等。
  • 线性混合模型简介混合线性模型(mixed linear model)是一种方差分量模型。在方差分量模型中,把既含有固定效应,又含有随机效应的模型,称为混合线性模型【信息来源:百度】一般线性模型中仅包含固定效应和噪声两项...

    线性混合模型简介

混合线性模型(mixed linear model)是一种方差分量模型。在方差分量模型中,把既含有固定效应,又含有随机效应的模型,称为混合线性模型【信息来源:百度】。一般线性模型中仅包含固定效应和噪声两项影响因素,也就是不会考虑随机因素对模型的影响。

数据集:R 的 MASS 包中的 oats 数据集

[图: oats 数据集结构]

数据解释:通过三个品种、四个水平的施肥处理,对燕麦进行了小区试验。实验分为 6 个区块,每个区块分为 4 个子区块。主小区采用品种,次小区采用施肥处理。线性模型需要求出自变量在因变量上的权重(斜率),那么针对分类型的自变量要怎么计算呢?——创建虚拟变量代替我们的分类水平进行运算。

    虚拟变量

自变量属于分类变量,所以在建立模型之前需要创建虚拟变量:

虚拟变量的作用就是对固定因子的各个水平进行量化。例如:在 varieties 中有三个不同的品种:Golden.rain、Marvellous、Victory,可以通过 R 的 contrasts() 函数查看或设置不同品种的虚拟变量。

[图: contrasts(oats$varieties) 的默认 treatment 编码输出]

    默认采用treatment编码方式,每一列的求和等于1,至于为什么Golden.rain该行全都是0,因为默认情况下,第一行是其他组的参照组。注:若是采用sum编码方式:每一列的求和等于0。

    展示一下treatment编码方式和sum编码方式。注:contr.sum是定义无序因子的,若为有序可以采用contr.poly来定义。

    例如:在3水平的因子变量上展示

[图: contr.treatment(3) 与 contr.sum(3) 的输出]

    同理nitrogen变量的虚拟变量编码方式如下:

[图: contrasts(oats$nitrogen) 输出]

    观察两张图片可以得到关于虚拟变量创建的规律:

    1、对于某一个因子型变量包含K个水平的因子,那么会产生K-1个对比向量;

    2、对比向量以一个K行*(K-1)列的矩阵组成。

    3、默认的情况下以第一行作为其他因子的参照。(比如:Golden.rain,0.0cwt)

    那么有了虚拟变量后要如何计算呢?

    1、单因素多水平(treatment编码方式):

    运算公式:y=b+wx+beta ;其中,b是截距,w是权重(斜率),beta(随机误差)

Y = b+w1*x1+w2*x2+beta

    A水平:YA = b+w1*0+w2+0+beta = b+beta

    B水平:YB = b+w1*1+w2*0+beta = b+w1+beta

    C水平:YC = b+w1*0+w2*1+beta = b+w2+beta

    那么模型的回归系数就等于:

    W1=YB — YA

    W2=YC — YA

    结论:回归系数等于真实效应差异。
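上面「回归系数等于真实效应差异」的结论可以用一小段 Python/numpy 最小二乘来数值验证(组均值 10、14、7 为假设的示例数值,并非本文数据):

```python
import numpy as np

# 假设的 3 水平因子(A、B、C)各组均值
means = {"A": 10.0, "B": 14.0, "C": 7.0}
y = np.concatenate([np.full(5, m) for m in means.values()])   # 每组 5 个无噪声观测

# treatment 编码:A 为参照组;x1、x2 为虚拟变量
x1 = np.concatenate([np.zeros(5), np.ones(5),  np.zeros(5)])  # B 水平取 1
x2 = np.concatenate([np.zeros(5), np.zeros(5), np.ones(5)])   # C 水平取 1
X = np.column_stack([np.ones(15), x1, x2])                    # 截距 + 虚拟变量

b, w1, w2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(b)    # 10.0 = 参照组 A 的均值
print(w1)   # 4.0  = YB - YA
print(w2)   # -3.0 = YC - YA
```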

    2、两因素两水平有交互作用的情况(sum编码方式):

    Y=b+w1*Xa+w2*Xb+w3*Xa*Xb+beta

[图: 两因素两水平 sum 编码示意表]

    主效应的回归系数:

    Ya1=b-w1-w2+beta

    Ya2=b+w1-w2+beta

    W1=(Ya2-Ya1)/2

    同理:

    W2=(Yb2-Yb1)/2

    结论:主效应的回归系数等于真实效应的一半。

    交互效应的回归系数:

    Ya1b1=b-w1-w2+w3+beta

    Ya1b2=b-w1+w2-w3+beta

    Ya2b1=b+w1-w2-w3+beta

    Ya2b2=b+w1+w2+w3+beta

    W3=((Ya2b2-Ya2b1)-(Ya1b2-Ya1b1))/4

    结论:交互效应等于真实交互效应的四分之一
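上述 sum 编码下「主效应系数等于真实效应的一半、交互系数等于真实交互效应的四分之一」的结论,同样可以用 numpy 验证(单元格均值为假设的示例数值):

```python
import numpy as np

# 假设的 2x2 设计单元格均值
Y = {("a1", "b1"): 8.0, ("a1", "b2"): 12.0, ("a2", "b1"): 9.0, ("a2", "b2"): 21.0}

# sum(偏差)编码:第一水平取 -1,第二水平取 +1
rows, y = [], []
for (a, b), mean in Y.items():
    xa = -1.0 if a == "a1" else 1.0
    xb = -1.0 if b == "b1" else 1.0
    rows.append([1.0, xa, xb, xa * xb])   # 截距、A 主效应、B 主效应、A:B 交互
    y.append(mean)

b0, w1, w2, w3 = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)[0]

# 主效应系数等于真实效应的一半:
print(w1, ((9 + 21) / 2 - (8 + 12) / 2) / 2)   # 均为 2.5
# 交互系数等于真实交互效应的四分之一:
print(w3, ((21 - 9) - (12 - 8)) / 4)           # 均为 2.0
```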

    你们也可以自己去尝试使用treatment编码方式,也就是将上面的Xa、Xb中的-1改为0。若不考虑交互作用的话,将公式中的w3*Xa*Xb剔除,再运算即可。

    treatment编码方式做两因素两水平无交互作用时,其主效应的回归系数是等于真实效应的。若有交互作用时,其主效应的回归系数是不等于真实效应的,但交互作用的回归系数等于真实效应。

    那么对于本案例中属于双因素多水平的怎么计算回归系数呢?

    3、K因素N水平的情况下,如何保证回归系数等于真实的效应值?

    R里面是没有这个函数的,可以自己设置编码。即每个因素产生N*(N-1)的矩阵,其中每一列代表一个比较,每一行代表一个水平。第一个水平一般是作为为参照组,其编码为-(1/N),每一列对应基线之外的某个水平与基线的对比,所在的行编码为1-(1/N),其他行编码为-(1/N)。你们可以带入公式去验证一下。

DV <- function(n){
  m <- matrix(-(1/n), n, n-1)
  for(i in 1:n){
    for(j in 1:(n-1)){
      if(i-1 == j)
        m[i,j] <- 1-(1/n)
      else
        m[i,j] <- -(1/n)
    }
  }
  return(m)
}

DV(3)
           [,1]       [,2]
[1,] -0.3333333 -0.3333333
[2,]  0.6666667 -0.3333333
[3,] -0.3333333  0.6666667

DV(4)
      [,1]  [,2]  [,3]
[1,] -0.25 -0.25 -0.25
[2,]  0.75 -0.25 -0.25
[3,] -0.25  0.75 -0.25
[4,] -0.25 -0.25  0.75

#将虚拟变量赋值给案例中的固定因子
contrasts(oats$varieties) <- DV(3)
contrasts(oats$nitrogen) <- DV(4)

contrasts(oats$varieties)
                  [,1]       [,2]
Golden.rain -0.3333333 -0.3333333
Marvellous   0.6666667 -0.3333333
Victory     -0.3333333  0.6666667

contrasts(oats$nitrogen)
        [,1]  [,2]  [,3]
0.0cwt -0.25 -0.25 -0.25
0.2cwt  0.75 -0.25 -0.25
0.4cwt -0.25  0.75 -0.25
0.6cwt -0.25 -0.25  0.75
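作为参照,上面 R 的 DV() 函数可以用 Python/numpy 等价地写出(仅作示意,非本文原代码):

```python
import numpy as np

def dv(n):
    """偏差对比矩阵:n 个水平 -> n-1 列;第 1 行为基线水平。"""
    m = np.full((n, n - 1), -1.0 / n)   # 基线与非对角位置均为 -(1/n)
    for j in range(n - 1):
        m[j + 1, j] = 1.0 - 1.0 / n     # 第 j+2 行对应水平在第 j+1 列取 1-(1/n)
    return m

print(dv(3))   # 与 R 的 DV(3) 一致:基线行 -1/3,对角 2/3
print(dv(4))   # 与 R 的 DV(4) 一致:基线行 -1/4,对角 3/4
```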

    其中我们将blocks变量作为被试,那么我们还需要增加一个项目变量,因为不同的项目之间的差异也是需要考虑的。本例考虑每一个被试在每个条件下有12次观测。

    #添加项目变量item for(i in 1:6){  item }oats 

    构建模型

    建立混合线性模型:使用lme4包中的lmer函数
    lmer(data = , formula = y ~ Fixed_Factor + (Random_intercept + Random_Slope | Random_Factor))
参数说明:

data:数据集
formula:R 中的公式
y:因变量
Fixed_Factor:固定因子,即自变量
Random_intercept:随机截距。不同个体的因变量分布不同,例如有的被试反应比较快,有的比较慢。模型中一般使用 1 表示;如果不需要随机截距,将 1 改为 -1 即可。
Random_Slope:随机斜率。不同个体上自变量与因变量的关系不同,例如有的被试对噪声敏感,有的不敏感。
Random_Factor:随机因子。例如:两种实验中的被试、项目等等。

[图: lmer 报错信息:随机效应个数大于样本数]

    这里出现报错了,也就是随机效应的个数(144)大于样本数(72)。所以在建立模型过程,将过多的随机效应增加到模型中,当随机效应的个数大于样本数时会报错,这时需适当的调整、取消一些随机效应或者增加样本数。

    题外话

    * 表示即考虑固定因子的效应,也考虑其交互效应

    + 表示只考虑固定因子的效应,

    :表示只考虑其交互效应

    如果说对formula这个参数的设置不清楚,可以去看线性回归部分。点击转换

为了能继续下去,就需要对随机效应进行虚拟变量编码,有选择性地对随机效应进行选取。【注:模型会随着加入的随机效应的不同而不同。这里仅为了展示,所以选择随机效应时比较随意,真实研究中需通过重复试验考察。】

    展示一下编码后的数据(由于数据太多了,仅展示数据结构)

[图: 编码后的数据结构(由 4 列变为 17 列)]

    由原本的4列变成了17列,增加13列。包括item(项目变量)以及从(intercept)开始后面的12项随机效应的。

    构建虚拟变量具体代码如下:

    mm head(mm)data View(data)

    同时为了方便,就建立一个零模型(不考虑协方差的作用),将被试和项目前面的竖线改为两条即可。以下为更改后的模型。

    oats.lmer = lmer(yields ~ varieties*nitrogen +                   (1+varieties1+varieties2+nitrogen1+nitrogen2+nitrogen3+varieties1:nitrogen1+varieties2:nitrogen1+varieties1:nitrogen2+varieties2:nitrogen2+varieties1:nitrogen3+varieties2:nitrogen3 || blocks)+                   (1+varieties1+varieties2+nitrogen1+nitrogen2+nitrogen3+varieties1:nitrogen1+varieties2:nitrogen1+varieties1:nitrogen2+varieties2:nitrogen2+varieties1:nitrogen3+varieties2:nitrogen3 || item),                 data = data)

更改后的模型尽管没有报错,但是还是出现了警告。这个警告的意思就是模型中出现了畸形(奇异)协方差矩阵。一般情况下零模型是不会出现畸形协方差矩阵的,因为零模型不考虑协方差的作用。尽管有警告,也没有影响。

    畸形协方差是什么意思呢?我列举一个简单的例子(如下所示的模型):

[图: 出现畸形协方差警告的示例模型]

    该模型也出现了畸形协方差矩阵。通过去观察随机效应可以得到为什么会存在警告。

[图: 示例模型的随机效应(random effects)输出]

    在summary详细信息里面的randomeffects(随机效应),最右边的corr就表示协方差,可以看到有一些相关性是非常强的,有的还等于1了,这就是协方差矩阵了。

    如何判断模型是否存在畸形协方差?

    通过函数isSingular()检查,若返回为TRUE,则说明模型存在畸形协方差。

isSingular(oats.lmer)
[1] TRUE

    :模型有时候可能还会存在一种警告:模型会出现无法收敛的情况,也是通过调整或者取消一部分的随机效应。

    回归正题

    那么回到我们的零模型当中,当出这些问题的时候我们如何去解决或者优化模型呢?

    可通过主成分分析,确认有效的随机效应的成分数量,也就是剔除哪些对模型没有影响的随机效应成分。

[图: summary(rePCA(oats.lmer)) 主成分分析输出]

item 随机因子上的主成分效应:第二行表示一个随机截距和十一个随机斜率各自能解释的方差比例,从第 4 个主成分开始能解释的比例为 0 了,因此在 item 随机因子上有 9 个随机效应是多余的。虽然第 4、5 个主成分仍有贡献,但是很小,就不考虑进来,选择剔除,但具体哪些是多余的,目前还不清楚。同理,在 blocks 随机因子上,从第 9 个主成分开始能够解释的比例为 0 了,所以在 blocks 随机因子上有 4 个随机效应是多余的。本次建模剔除 item 随机因子上的 9 个随机效应和 blocks 随机因子上的 4 个随机效应。

    怎么确定需要剔除的随机效应呢?

    通过观察随机效应的标准差有两种方式:

    一:通过查看原始模型中的Random effects中的Std.Dev

    二:通过函数VarCorr()

    两者的结果是一样的。

[图: 随机效应标准差(VarCorr)输出]

通过前面的主成分分析,在 item 随机因子上剔除 9 个随机效应,也就是剔除标准差比较小的随机效应。同理,在 blocks 随机因子上剔除 4 个随机效应。

    重新建模以后发现仍存在畸形协方差,再次采用主成分优化,经过N次以后得到最终的模型即:仅考虑blocks随机因子下的随机截距,item随机因子下的品种2和施肥2的交互与品种2和施肥3的交互随机效应。具体模型如下:

    oats.lmer1 = lmer(yields ~ varieties*nitrogen +                  (1| blocks)+                  (-1+varieties2:nitrogen2+varieties2:nitrogen3 || item),                 data = data)

尽管模型没有报错和警告,但是模型的效果并不是很好!(可以尝试将其他随机效应放入模型中)

    模型解读

    固定效应是通过summary(model)查看,其有四个部分:

    一:Scaled residuals:包含标准差的最小值、中位数、四分位数。

    二:Random effects:随机效应部分

    三:Fixed effects:固定效应部分

    四:Correlation of Fixed Effects:固定效应的协方差。

[图: summary(oats.lmer1) 固定效应输出]

固定效应的结果如上图所示,这里是将 varieties=Golden.rain 和 nitrogen=0.0cwt 作为参照。可以看出 nitrogen1、nitrogen2、nitrogen3(0.2cwt/0.4cwt/0.6cwt)的参数估计(Estimate)要显著高于 0.0cwt。在品种 varieties 以及两者的交互作用上却没有显著差异。其余部分可自行去跑一下,这里由于篇幅问题就不展示了。

    主效应和交互作用是通过anova(model)查看:

> anova(oats.lmer1)
Type III Analysis of Variance Table with Satterthwaite's method
                    Sum Sq Mean Sq NumDF  DenDF F value    Pr(>F)    
varieties           1530.6   765.3     2 51.918  3.5552   0.03572 *  
nitrogen           18146.0  6048.7     3 50.194 28.0982 8.313e-11 ***
varieties:nitrogen   277.5    46.2     6 38.619  0.2148   0.96984    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

    结果如上,可看出燕麦的品种和施肥处理上的主效应显著,但其交互作用都不显著。

    最后我们通过方差分析判断两个模型是否存在差异

[图: anova(oats.lmer, oats.lmer1) 模型比较输出]

    方差分析的P值等于0.8,也就是模型间不存在显著性差异。即两个模型是一样的。

    事后检验

    模型的主效应显著后需要进行事后比较

    由于我们模型的交互作用并不显著仅是主效应显著,那么对主效应进行多重比较。首先对品种(varieties)做多重比较。结果如下:

> emmeans(oats.lmer1, pairwise~varieties, adjust="none")
$emmeans
 varieties   emmean   SE   df lower.CL upper.CL
 Golden.rain  104.4 7.03 6.53     87.5      121
 Marvellous   109.4 7.00 6.47     92.6      126
 Victory       97.4 7.22 6.93     80.3      115

Results are averaged over the levels of: nitrogen 
Degrees-of-freedom method: kenward-roger 
Confidence level used: 0.95 

$contrasts
 contrast                 estimate   SE   df t.ratio p.value
 Golden.rain - Marvellous    -4.98 4.50 49.6 -1.106  0.2742 
 Golden.rain - Victory        6.98 4.77 54.9  1.462  0.1495 
 Marvellous - Victory        11.96 4.67 50.2  2.562  0.0134 

Results are averaged over the levels of: nitrogen 
Degrees-of-freedom method: kenward-roger

    contrasts返回的就是多重比较的结果。可以看到只有品种(marvellous)与品种(victory)是存在显著差异的。也可以通过emmeans返回的结果中可以看到品种(marvellous)燕麦的产量均值是109.4而品种(victory)的产量均值为97.4。

    我们再来看看施肥处理(nitrogen)的多重比较。结果如下:

> emmeans(oats.lmer1, pairwise~nitrogen, adjust="none")
$emmeans
 nitrogen emmean   SE   df lower.CL upper.CL
 0.0cwt     79.3 7.26 7.37     62.4     96.3
 0.2cwt     99.1 7.23 7.31     82.1    116.0
 0.4cwt    113.3 7.35 7.61     96.2    130.4
 0.6cwt    123.3 7.45 7.81    106.0    140.5

Results are averaged over the levels of: varieties 
Degrees-of-freedom method: kenward-roger 
Confidence level used: 0.95 

$contrasts
 contrast        estimate   SE   df t.ratio p.value
 0.0cwt - 0.2cwt    -19.7 5.17 49.1 -3.819  0.0004 
 0.0cwt - 0.4cwt    -33.9 5.26 50.6 -6.452  <.0001 
 0.0cwt - 0.6cwt    -43.9 5.29 50.4 -8.314  <.0001 
 0.2cwt - 0.4cwt    -14.2 5.30 47.3 -2.680  0.0101 
 0.2cwt - 0.6cwt    -24.2 5.43 45.7 -4.461  0.0001 
 0.4cwt - 0.6cwt    -10.0 5.54 54.3 -1.807  0.0763 

Results are averaged over the levels of: varieties 
Degrees-of-freedom method: kenward-roger

    同理contrasts返回的是施肥处理的多重比较结果。从上面可以看到除了0.4cwt与0.6cwt不存在显著差异外,其余的均存在显著性差异。

    总结

    一般来说做事后检验时,首先看交互作用,交互作用显著的话,就需要进行简单分析,简单分析中主效应显著了,再做多重比较。若交互作用不显著,那就直接对主效应做多重比较。也就是说,当你交互作用显著时,那么单纯做主效应分析的意义不大了,总之,交互作用的显著性优先考虑。(虽然该模型的交互作用不显著,但是还是走一遍流程吧!)

    例子:

> #简单效应分析:比较不同品种上施肥处理的主效应
> joint_tests(oats.lmer1, by="varieties")
varieties = Golden.rain:
 model term df1   df2 F.ratio p.value
 nitrogen     3 40.31   9.431 0.0001 

varieties = Marvellous:
 model term df1   df2 F.ratio p.value
 nitrogen     3 42.20   7.063 0.0006 

varieties = Victory:
 model term df1   df2 F.ratio p.value
 nitrogen     3 14.94   8.103 0.0019

    结果:varieties = Golden.rain时,施肥处理的主效应是显著的;同时其他两种的品种在施肥处理上的主效应是显著的。

    那么我的主效应显著以后就需要做多重比较了。

> emmeans(oats.lmer1, pairwise~nitrogen|varieties, adjust="none")
$emmeans
varieties = Golden.rain:
 nitrogen emmean    SE   df lower.CL upper.CL
 0.0cwt     80.2  8.80 14.9     61.5     99.0
 0.2cwt     98.2  8.76 14.8     79.5    116.8
 0.4cwt    113.1  9.27 16.6     93.5    132.7
 0.6cwt    126.2  9.19 16.4    106.7    145.6

varieties = Marvellous:
 nitrogen emmean    SE   df lower.CL upper.CL
 0.0cwt     86.4  8.79 15.0     67.7    105.1
 0.2cwt    108.6  8.76 14.8     89.9    127.3
 0.4cwt    117.1  9.02 16.1     98.0    136.2
 0.6cwt    125.5  9.16 16.4    106.1    144.9

varieties = Victory:
 nitrogen emmean    SE   df lower.CL upper.CL
 0.0cwt     71.4  9.12 16.6     52.1     90.7
 0.2cwt     90.5  9.26 17.0     71.0    110.0
 0.4cwt    109.7 10.03 14.0     88.1    131.2
 0.6cwt    118.2 10.55 13.0     95.4    141.0

Degrees-of-freedom method: kenward-roger 
Confidence level used: 0.95 

$contrasts
varieties = Golden.rain:
 contrast        estimate    SE   df t.ratio p.value
 0.0cwt - 0.2cwt   -17.94  8.57 40.3 -2.093  0.0427 
 0.0cwt - 0.4cwt   -32.88  9.21 43.9 -3.571  0.0009 
 0.0cwt - 0.6cwt   -45.96  9.18 43.4 -5.007  <.0001 
 0.2cwt - 0.4cwt   -14.94  9.18 44.0 -1.628  0.1107 
 0.2cwt - 0.6cwt   -28.01  9.17 42.4 -3.055  0.0039 
 0.4cwt - 0.6cwt   -13.07  9.82 45.7 -1.331  0.1898 

varieties = Marvellous:
 contrast        estimate    SE   df t.ratio p.value
 0.0cwt - 0.2cwt   -22.17  8.65 42.2 -2.563  0.0140 
 0.0cwt - 0.4cwt   -30.69  8.97 46.8 -3.420  0.0013 
 0.0cwt - 0.6cwt   -39.11  9.14 44.8 -4.279  0.0001 
 0.2cwt - 0.4cwt    -8.52  8.98 46.2 -0.949  0.3476 
 0.2cwt - 0.6cwt   -16.94  9.09 45.8 -1.864  0.0687 
 0.4cwt - 0.6cwt    -8.43  9.55 51.1 -0.882  0.3817 

varieties = Victory:
 contrast        estimate    SE   df t.ratio p.value
 0.0cwt - 0.2cwt   -19.10  9.49 54.5 -2.012  0.0492 
 0.0cwt - 0.4cwt   -38.24 10.35 16.4 -3.693  0.0019 
 0.0cwt - 0.6cwt   -46.77 10.74 14.9 -4.354  0.0006 
 0.2cwt - 0.4cwt   -19.15 10.75 12.2 -1.780  0.0998 
 0.2cwt - 0.6cwt   -27.67 10.98 13.1 -2.521  0.0255 
 0.4cwt - 0.6cwt    -8.52 11.88 12.7 -0.717  0.4861 

Degrees-of-freedom method: kenward-roger

    式子:pairwise~nitrogen|varieties,其中:竖线前是预测变量,竖线后调节变量,如果实在不理解的话,你就理解是一个分式,竖线前是分子,竖线后是分母。当你要比较分子间的差异时,是不是要保证分母是相同的,也就是说,我要比较不同的施肥处理在燕麦产量上的差异时,我要保证的是施肥处理应作用在同一个品种上才可以进行比较?(在相同的品种上,不同施肥处理的方式间的差异是否显著)

结果还是看 contrasts 部分。例如在 varieties(品种)= Golden.rain 上,施肥处理 0.2cwt - 0.4cwt 和 0.4cwt - 0.6cwt 这两组对比是不存在显著差异的(p 值大于 0.05),其他同理。

    反之,如果你的简单主效应分析的是:不同施肥处理上品种的主效应,若显著后,需要做多重比较(在相同的施肥处理方式上,不同品种间的差异是否显著)

    【注:多重比较有很多比较的方法】如下:

> p.adjust.methods
[1] "holm"       "hochberg"   "hommel"     "bonferroni" "BH"         "BY"         "fdr"       
[8] "none"

    拓展

    一:混合线性模型不仅可以使用lme4包中lmer()函数,也可以使用nlme包中lme()函数,这两个包语法都差不多,相对来说lme4这个包的运算速度会快一点。还有一个ASReml-R包中asreml()函数。

    二:如果因变量是分类变量,则需要用广义线性模型。

glmer(data = , formula = , family = , ...)

其中,data 就是数据集,formula 和上面是一样的,family 和逻辑回归中是一样的。

    代码

#释放内存
rm(list=ls())
gc()
#加载数据:采用R的MASS包中的oats数据集
library(MASS)
data(oats)
names(oats) = c('blocks', 'varieties', 'nitrogen', 'yields')
str(oats)
#虚拟变量编码方式
library(lme4)
contrasts(oats$varieties)
contr.treatment(3)
contr.sum(3)
contr.poly(3)
contrasts(oats$nitrogen)
table(oats$varieties)
table(oats$nitrogen)
DV <- function(n){
  m <- matrix(-(1/n), n, n-1)
  for(i in 1:n){
    for(j in 1:(n-1)){
      if(i-1 == j)
        m[i,j] <- 1-(1/n)
      else
        m[i,j] <- -(1/n)
    }
  }
  return(m)
}
DV(2)
#重置品种和施肥处理的虚拟变量编码
contrasts(oats$varieties) <- DV(3)
contrasts(oats$nitrogen) <- DV(4)
#添加项目变量
item
for(i in 1:6){
  item 1:
}
oats
#创建虚拟变量
mm
head(mm)
data
View(data)
str(oats)
str(data)
#建模
oats.lmer_full = lmer(yields ~ varieties*nitrogen +
                   (1+varieties*nitrogen | blocks)+
                   (1+varieties*nitrogen | item),
                 data = oats)
#模型优化
oats.lmer = lmer(yields ~ varieties*nitrogen +
                   (1+varieties1+varieties2+nitrogen1+nitrogen2+nitrogen3+varieties1:nitrogen1+varieties2:nitrogen1+varieties1:nitrogen2+varieties2:nitrogen2+varieties1:nitrogen3+varieties2:nitrogen3 || blocks)+
                   (1+varieties1+varieties2+nitrogen1+nitrogen2+nitrogen3+varieties1:nitrogen1+varieties2:nitrogen1+varieties1:nitrogen2+varieties2:nitrogen2+varieties1:nitrogen3+varieties2:nitrogen3 || item),
                 data = data)
summary(oats.lmer)
isSingular(oats.lmer)
summary(rePCA(oats.lmer))
VarCorr(oats.lmer)
#最终模型
oats.lmer1 = lmer(yields ~ varieties*nitrogen +
                  (1| blocks)+
                  (-1+varieties2:nitrogen2+varieties2:nitrogen3 || item),
                 data = data)
summary(oats.lmer1)
anova(oats.lmer1)
anova(oats.lmer, oats.lmer1)
#主效应事后比较
library(emmeans)
p.adjust.methods
emmeans(oats.lmer1, pairwise~varieties, adjust="none")
emmeans(oats.lmer1, pairwise~nitrogen, adjust="none")
#简单效应分析:比较不同品种上施肥处理的主效应
joint_tests(oats.lmer1, by="varieties")
emmeans(oats.lmer1, pairwise~nitrogen|varieties, adjust="none")


    感谢您的支持

  • 1. 题目: 混合线性模型理论1 2. 大纲 3. 一般线性模型
• 如何实现混合线性模型?目录:1 基本表达式;2 数据整理形式;3 结果查看;4 随机斜率的取舍;5 调整固定因子比较基线;6 简单效应分析;7 Planned contrasts;8 广义混合线性模型;9 REML 和 ML。Hello, 这里是行上行下,我是张光耀~ ...
  • 混合线性模型介绍--Wiki

    千次阅读 2018-08-01 21:04:37
    混合线性模型:是即包括固定因子,又包括随机因子的模型。 混合线性模型被广泛应用于物理、生物和社会科学。尤其是一些重复测量的数据及面板数据。混合线性模型比较突出的特点是可以非常优秀的处理缺失值,相对于...
  • R语言中混合线性模型的实现以及参数解析

    万次阅读 热门讨论 2019-04-10 20:09:39
    为什么要用混合线性模型:比如测量了不同收入水平的人群的收入和幸福感,但每个群体内收入水平是不同的,幸福感也不同,两者之间的关系也是不同的, 如果直接用一般线性模型,会造成错误的结论,这个时候要考察的是...
  • 线性混合模型简介混合线性模型(mixed linear model)是一种方差分量模型。在方差分量模型中,把既含有固定效应,又含有随机效应的模型,称为混合线性模型【信息来源:百度】一般线性模型中仅包含固定效应和噪声两项...
  • 1. 课程来源: ... 需要安装的R包 install.packages(c('lmerTest', 'lsmeans', 'car', 'multcomp', 'ggplot2', 'knitr')) ...利用混合线性模型在农业,食品科学,生物学,医学和技术科学中的应用,获得有关数据统计分析...
  • 1、线性回归模型(有PPT) 适用于自变量X和因变量Y为线性关系,具体来说,画出散点图可以用一条直线来近似拟合。 模型可以表达为:{y=Xβ+εε∼MVN(0,σ2In),其中ε是随机误差,MVN为多元正态分布。 模型有几个...
  • 提出了一种基于混合线性模型的模板跟踪方法,提取目标的运动参数和表观特征,建立数据集,利用全监督学习的方法计算出两者的映射关系,从而实现对目标的有效跟踪。这种方法既克服了由单一线性模型造成的非线性误差,...
  • 基于R的混合线性模型的实现

    万次阅读 多人点赞 2018-12-24 11:46:00
    作者:张光耀,硕士研究生,现就读于中科院心理所,GitHub 主页: https://github.com/usplos前言为什么要用混合线性模型:比如测量了不同收入水平...
  • 混合线性模型(linear mixed models)

    千次阅读 2017-08-25 21:35:00
    一般线性模型、混合线性模型、广义线性模型 广义线性模型GLM很简单,举个例子,药物的疗效和服用药物的剂量有关。这个相关性可能是多种多样的,可能是简单线性关系(发烧时吃一片药退烧0.1度,两片药退烧0.2度,...
  • http://bbs.pinggu.org/thread-2996069-1-1.html
  • 1.混合线性模型简介混合线性模型,又名多层线性模型(Hierarchical linear model)。它比较适合处理嵌套设计(nested)的实验和调查研究数据。此外,它还特别适合处理带有被试内变量的实验和调查数据,因为该模型不需要...
  • 本文是对下文的函数创作了可视化界面。混合线性模型中的模型遍历:https://zhuanlan.zhihu.com/p/67048151在RStudio中执行下方代码,即...
  • 这篇文档,是为那些想了解混合线性模型的人准备。 某些部分适合于应用学科中任何人,而其知识不多于标准回归的知识(例如,开始,模型1-2,结束),而其他部分则假定您熟悉矩阵符号和/或其他相对高级的知识。 希望...
  • Hello,这里是行上行下,我是张光耀~本文最早发布在本人的GitHub上,后来在R语言...R中混合线性模型可依靠lme4或者lmerTest数据包(强烈推荐后者,因为会输出显著性)library(lmerTest)基本表达式fit = lmer(data =...
  • 混合线性模型 y=Xb+Zu+e y = Xb + Zu + e y=Xb+Zu+e [ue]∼N([00],[G(σg)00R(σγ)]) \begin{bmatrix} u\\e \end{bmatrix} \sim N (\begin{bmatrix}0\\0\end{bmatrix}, \begin{bmatrix} G(\sigma_g) &am...
  • 一些有用的资源,包括: > Docs为MixedML的公式版本 > Docs为MixedML的结果 > This Jupyter笔记本以及使用MixedML(Python)的示例 > Stanford tutorial混合型号(R) > Tutorial关于固定和随机效应(R)
  • 眼动数据分析中,往往随机因子的个数不唯一,固定因子的个数也不唯一,因此存在若干个混合线性模型。 本文介绍作者自编的函数,作用是根据随机因子和固定因子,较快地遍历所有模型,并获取这些模型的AIC / BIC值,...
  • 我在玩这个代码,这是一元线性混合效应建模。数据集表示:学生作为s讲师(d)部门为部门服务即服务在R的lme4包语法中(Bates等人,2015),实现的模型可以概括为:y ~ 1 + (1|students) + (1|instructor) + (1|dept) + ...
  • 微信公众号:医学统计与R语言简介多层线性模型(Multilevel Linear Model,HLM)是针对大规模的社会调查、经济研究领域中广泛存在的“嵌套”和“分层”结构数据而发展起来的一种新型统计分析技术,与传统统计方法相比...
  • 本文最早发布在本人的GitHub上,后来在R语言中文社区的公共号上...R 中混合线性模型可依靠lme4或者lmerTest数据包(强烈推荐后者,因为会输出显著性)library(lmerTest)基本表达式fit = lmer(data = , formula = DV...
