Computing Missing Values with Linear Regression
Published: 2019-03-04


  • Missing data

    Missing data can broadly be classified into three types:

    1. MCAR (Missing Completely At Random) means that there is nothing systematic about why some data is missing; that is, there is no relationship between the fact that data is missing and either the observed or unobserved covariates.
    2. MAR (Missing At Random) resembles MCAR in that there is still an element of randomness, but here the probability that a value is missing may depend on the observed data, not on the missing values themselves.
    3. MNAR (Missing Not At Random) implies that the fact that data is missing is directly correlated with the value of the missing data.
  • How to deal with missing data

    1. Just delete missing entries
    2. Replacing missing values with the mean or median
    3. Linear Regression

      First, several predictors of the variable with missing values are identified using a correlation matrix. The best predictors are selected and used as independent variables in a regression equation.

      The variable with missing data is used as the dependent variable.

      Second, cases with complete data for the predictor variables are used to generate the regression equation;

      Third, the equation is then used to predict missing values for incomplete cases in an iterative process.

      The above describes single-variable (simple) linear regression; a minimal sketch of this procedure follows.
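      As a concrete illustration (not from the original post), here is a minimal sketch of the procedure above with pandas and scikit-learn, assuming a DataFrame in which the column "y" has missing values and a correlated column "x" is used as the predictor; the column names and toy values are placeholders.

      ```python
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LinearRegression

      # Toy data: "x" is fully observed, "y" has missing entries (placeholder values).
      df = pd.DataFrame({
          "x": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
          "y": [2.1, 3.9, np.nan, 8.2, np.nan, 12.1],
      })

      # Fit the regression on the complete cases, with "y" as the dependent variable.
      complete = df.dropna(subset=["y"])
      model = LinearRegression().fit(complete[["x"]], complete["y"])

      # Predict the dependent variable for the incomplete cases and fill them in.
      missing = df["y"].isna()
      df.loc[missing, "y"] = model.predict(df.loc[missing, ["x"]])

      print(df)
      ```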

    4. Multiple linear regression

      Linear regression has significant limitations, such as:

      • It can’t easily match any data set that is non-linear
      • It can only be used to make predictions that fit within the range of the training data set
      • It can only be fit to data sets with a single dependent variable and a single independent variable

      This is where multiple regression comes in. It is specifically designed to create regressions on models with a single dependent variable and multiple independent variables.

      The equation for multiple regression takes the form:

      $y = b_1 x_1 + b_2 x_2 + \cdots + b_n x_n + a$

      $b_i$: the coefficients;

      $x_i$: the independent variables, also called predictor variables;

      $y$: the dependent variable, also called the criterion variable;

      $a$: a constant (the intercept), the value of the dependent variable when all independent variables are zero.
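      As a small illustration of this equation (the numbers are arbitrary, not from the original post), it can be evaluated directly with NumPy:

      ```python
      import numpy as np

      # y = b_1*x_1 + b_2*x_2 + b_3*x_3 + a for one observation (illustrative values).
      b = np.array([3.0, -2.0, 0.5])   # coefficients b_1, b_2, b_3
      x = np.array([1.2, 0.7, 4.0])    # independent variables x_1, x_2, x_3
      a = 5.0                          # the constant term

      y = np.dot(b, x) + a
      print(y)                         # 3.0*1.2 - 2.0*0.7 + 0.5*4.0 + 5.0 = 9.2
      ```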

      How do we fit a multiple regression model?

      Just as we minimize the sum of squared errors to find $b$ in simple linear regression, we minimize the sum of squared errors to find all the $b_i$ terms in multiple regression.

      In practice, we use stochastic gradient descent.
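      Below is a hedged sketch of that idea in plain NumPy: the coefficients and intercept are updated one sample at a time along the negative gradient of the squared error. The synthetic data and learning rate are illustrative assumptions, not part of the original post.

      ```python
      import numpy as np

      def sgd_multiple_regression(X, y, lr=0.01, epochs=100, seed=0):
          """Fit y ≈ X @ b + a by stochastic gradient descent on the squared error."""
          rng = np.random.default_rng(seed)
          n_samples, n_features = X.shape
          b = np.zeros(n_features)
          a = 0.0
          for _ in range(epochs):
              for i in rng.permutation(n_samples):   # visit samples in random order
                  error = (X[i] @ b + a) - y[i]      # prediction error for one sample
                  b -= lr * error * X[i]             # gradient step for the coefficients
                  a -= lr * error                    # gradient step for the constant a
          return b, a

      # Synthetic data with known coefficients, for illustration only.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 2))
      y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 5.0
      b, a = sgd_multiple_regression(X, y)
      print(b, a)   # should be close to [3, -2] and 5
      ```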

      How do we make sure the model fits the data well?

      Use the same $r^2$ value that was used for simple linear regression.

      $r^2$, called the coefficient of determination, states the proportion of variation in the data that is explained by the model. It is a value ranging from 0 to 1, with 0 meaning that the model has no ability to predict the result and 1 meaning that the model predicts the result perfectly.
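      For illustration, $r^2$ can be computed from its definition, $r^2 = 1 - SS_{res}/SS_{tot}$, or with scikit-learn's r2_score; the observed and predicted values below are made up.

      ```python
      import numpy as np
      from sklearn.metrics import r2_score

      y_true = np.array([3.0, 5.0, 7.0, 9.0])   # observed values (illustrative)
      y_pred = np.array([2.8, 5.1, 7.3, 8.9])   # model predictions (illustrative)

      # Coefficient of determination from its definition.
      ss_res = np.sum((y_true - y_pred) ** 2)
      ss_tot = np.sum((y_true - y_true.mean()) ** 2)
      print(1 - ss_res / ss_tot)

      # The same value via scikit-learn.
      print(r2_score(y_true, y_pred))
      ```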


