  • This toolkit contains Windows runtime environments for Hadoop 2.6, 2.6.3, 2.6.4, 2.7.1, 2.8.0, 2.8.1, 2.8.3 and 3.0.0.
  • 1. Import the dependencies: the hadoop-client and YARN artifacts listed in the POM excerpt below.

    1. Import the dependencies

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-common</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-client</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.anarres.lzo</groupId>
        <artifactId>lzo-hadoop</artifactId>
        <version>1.0.0</version>
        <scope>compile</scope>
    </dependency>
    

    2. The three files, Mapper, Reducer and App, are as follows:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import java.io.IOException;
    public class WCReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            // Sum all the 1s emitted for this word
            int count = 0;
            for (IntWritable iw : values) {
                count += iw.get();
            }
            // Log which thread handled which key, then emit (word, total)
            String tno = Thread.currentThread().getName();
            System.out.println(tno + " : WCReducer :" + key.toString() + "=" + count);
            context.write(key, new IntWritable(count));
        }
    }
     
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import java.io.IOException;
    
    public class WCMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            Text keyOut = new Text();
            IntWritable valueOut = new IntWritable();
            // Emit (word, 1) for each space-separated token in the line
            String[] arr = value.toString().split(" ");
            for (String s : arr) {
                keyOut.set(s);
                valueOut.set(1);
                context.write(keyOut, valueOut);
            }
        }
    }
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
    public class WCApp {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "file:///");
            Job job = Job.getInstance(conf);
            //set the job's properties
            job.setJobName("WCApp");                        //job name
            job.setJarByClass(WCApp.class);                 //class used to locate the jar
            job.setInputFormatClass(TextInputFormat.class); //input format
            //output format class
            //job.setOutputFormatClass(SequenceFileOutputFormat.class);
            //add the input path
            FileInputFormat.addInputPath(job, new Path(args[0]));
            //set the output path
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            //maximum split size
            //FileInputFormat.setMaxInputSplitSize(job, 13);
            //minimum split size
            //FileInputFormat.setMinInputSplitSize(job, 1L);
            job.setPartitionerClass(MyPartitioner.class);   //custom partitioner (sketch below)
            job.setCombinerClass(WCReducer.class);          //combiner class
            job.setMapperClass(WCMapper.class);             //mapper class
            job.setReducerClass(WCReducer.class);           //reducer class
            job.setNumReduceTasks(3);                       //number of reducers
            job.setMapOutputKeyClass(Text.class);           //map output key type
            job.setMapOutputValueClass(IntWritable.class);  //map output value type
            job.setOutputKeyClass(Text.class);              //final output key type
            job.setOutputValueClass(IntWritable.class);     //final output value type
            job.waitForCompletion(true);
        }
    }
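
    The App above calls job.setPartitionerClass(MyPartitioner.class), but the post never shows that class. A minimal sketch that compiles against the same API (it reproduces the default HashPartitioner behaviour, so it is a stand-in, not the author's code):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class MyPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            // Mask the sign bit so negative hash codes still map to a valid partition
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }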
    

    3. Install Hadoop: unpack the Hadoop archive and configure the environment variables.

    4. Running it reports an error:
    (null) entry in command string: null chmod 0700
    There are two fixes circulating online. The first:
    specify hadoop.home.dir in the program

    System.setProperty("hadoop.home.dir","c:/Users/Administrator/Desktop/hadoop-2.7.7/bin" );
    

    Completely useless.
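
    (A likely reason this first fix has no effect as written: hadoop.home.dir is expected to point at the Hadoop root, not at bin, because Hadoop itself appends \bin\winutils.exe when it looks for the binary. A corrected call, assuming the same unpack location, would be:)

    // hadoop.home.dir must name the Hadoop root; Hadoop appends \bin\winutils.exe itself
    System.setProperty("hadoop.home.dir", "c:/Users/Administrator/Desktop/hadoop-2.7.7");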
    The second, see: https://ask.hellobi.com/blog/jack/5063
    Second fix, step one:
    download winutils.exe and libwinutils.lib and copy them into the %HADOOP_HOME%\bin directory.
    Completely useless.
    Step two:
    download hadoop.dll and copy it into the c:\windows\system32 directory.
    The error finally changed, to:
    file permissions : java.io.IOException: (null) entry in command string: null ls -F

    Searching again turned up the advice that the input path must point at a concrete file. Open RUN -> Edit Configurations, as in the screenshot below.
    [screenshot: Run -> Edit Configurations dialog]
    Be careful here: the first argument is the input and must name a concrete file; the second is the output, and the out folder must not already exist. If it does, delete it first (or clean it up programmatically, as in the sketch below).
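
    Instead of deleting the out folder by hand before each run, the job can clean it up itself. A minimal sketch (the helper class and method names are mine, not from the original post):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OutputCleaner {
        // Delete the job output directory if it already exists, so reruns
        // don't fail with "output directory already exists".
        public static void deleteIfExists(Configuration conf, String dir) throws Exception {
            Path outPath = new Path(dir);
            FileSystem fs = outPath.getFileSystem(conf);
            if (fs.exists(outPath)) {
                fs.delete(outPath, true); // true = delete recursively
            }
        }
    }

    Calling OutputCleaner.deleteIfExists(conf, args[1]) in WCApp before FileOutputFormat.setOutputPath(...) makes reruns idempotent.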

  • Fixes the problem of running Java programs from Eclipse. Just drop this file into the bin directory under the hadoop-2.8.1 folder. Remember to configure the run arguments, and the environment variables needed before running, or the results may be wrong.
  • Running Hadoop against a cluster from Windows: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.
    http://blog.csdn.net/xubo245/article/details/50587660
    

    Running Hadoop against a cluster from Windows: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.

    Environment:

    Windows 7, 64-bit

    cluster: Hadoop 2.6.0 on Ubuntu




    Following http://blog.csdn.net/congcong68/article/details/42043093 solves it.



    The main change is to the NativeIO file.

    Create a new package under the project: org.apache.hadoop.io.nativeio

    and copy the NativeIO.java file into it.

    Modify line 557:

        	return true;
    //      return access0(path, desiredAccess.accessRight());
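
    For context, the method being patched in the copied NativeIO.java looks roughly like this in the Hadoop 2.6 source (exact line numbers vary across versions); the patch simply short-circuits the native permission probe:

    // Inside org.apache.hadoop.io.nativeio.NativeIO.Windows, copied into the project.
    // Returning true skips the call into native access0, which is what fails when
    // hadoop.dll / winutils.exe are missing or mismatched.
    public static boolean access(String path, AccessRight desiredAccess)
            throws IOException {
        return true;
        // return access0(path, desiredAccess.accessRight());
    }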

    Then run WordCount:

    
    import java.io.IOException;  
    import java.util.StringTokenizer;  
      
    import org.apache.hadoop.conf.Configuration;  
    import org.apache.hadoop.fs.Path;  
    import org.apache.hadoop.io.IntWritable;  
    import org.apache.hadoop.io.Text;  
    import org.apache.hadoop.mapreduce.Job;  
    import org.apache.hadoop.mapreduce.Mapper;  
    import org.apache.hadoop.mapreduce.Reducer;  
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;  
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;  
      
    public class WordCount {  
      
      public static class TokenizerMapper  
           extends Mapper<Object, Text, Text, IntWritable>{  
      
        private final static IntWritable one = new IntWritable(1);  
        private Text word = new Text();  
      
        public void map(Object key, Text value, Context context  
                        ) throws IOException, InterruptedException {  
          StringTokenizer itr = new StringTokenizer(value.toString());  
          while (itr.hasMoreTokens()) {  
            word.set(itr.nextToken());  
            context.write(word, one);  
          }  
        }  
      }  
      
      public static class IntSumReducer  
           extends Reducer<Text,IntWritable,Text,IntWritable> {  
        private IntWritable result = new IntWritable();  
      
        public void reduce(Text key, Iterable<IntWritable> values,  
                           Context context  
                           ) throws IOException, InterruptedException {  
          int sum = 0;  
          for (IntWritable val : values) {  
            sum += val.get();  
          }  
          result.set(sum);  
          context.write(key, result);  
        }  
      }  
      
      public static void main(String[] args) throws Exception {  
        Configuration conf = new Configuration();  
        Job job = Job.getInstance(conf, "word count");  
        job.setJarByClass(WordCount.class);  
        job.setMapperClass(TokenizerMapper.class);  
        job.setCombinerClass(IntSumReducer.class);  
        job.setReducerClass(IntSumReducer.class);  
        job.setOutputKeyClass(Text.class);  
        job.setOutputValueClass(IntWritable.class);  
        FileInputFormat.addInputPath(job, new Path(args[0]));  
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  
        System.exit(job.waitForCompletion(true) ? 0 : 1);  
      }  
    } 

    The args are:

    hdfs://219.219.220.149:9000/input
    hdfs://219.219.220.149:9000/output0126
    

    Source file: [screenshot omitted]

    Result file: [screenshot omitted]

  • Execution log of running a standalone Hadoop program on Windows

    2016-10-05 22:35:35
    2016-10-05 22:23:20,565 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1129)) - session.id is deprecated. Instead, use dfs.metrics.session-id
    2016-10-05 22:23:20,569 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
    2016-10-05 22:23:20,961 WARN  [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
    2016-10-05 22:23:20,964 WARN  [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(171)) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
    2016-10-05 22:23:21,790 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 4
    2016-10-05 22:23:21,848 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(199)) - number of splits:4

    2016-10-05 22:23:22,031 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(288)) - Submitting tokens for job: job_local1665328582_0001
    2016-10-05 22:23:22,365 INFO  [main] mapreduce.Job (Job.java:submit(1301)) - The url to track the job: http://localhost:8080/
    2016-10-05 22:23:22,366 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1346)) - Running job: job_local1665328582_0001
    2016-10-05 22:23:22,367 INFO  [Thread-8] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
    2016-10-05 22:23:22,377 INFO  [Thread-8] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    2016-10-05 22:23:22,442 INFO  [Thread-8] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
    2016-10-05 22:23:22,442 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1665328582_0001_m_000000_0
    2016-10-05 22:23:22,495 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
    2016-10-05 22:23:22,613 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@20b621d
    2016-10-05 22:23:22,620 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(753)) - Processing split: file:/D:/hadoop/wordCountInput/hdfs-default.txt:0+80682
    2016-10-05 22:23:22,676 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1202)) - (EQUATOR) 0 kvi 26214396(104857584)
    2016-10-05 22:23:22,676 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(995)) - mapreduce.task.io.sort.mb: 100
    2016-10-05 22:23:22,676 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(996)) - soft limit at 83886080
    2016-10-05 22:23:22,677 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(997)) - bufstart = 0; bufvoid = 104857600
    2016-10-05 22:23:22,677 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(998)) - kvstart = 26214396; length = 6553600
    2016-10-05 22:23:22,684 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(402)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    2016-10-05 22:23:22,959 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
    2016-10-05 22:23:22,959 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1457)) - Starting flush of map output
    2016-10-05 22:23:22,959 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1475)) - Spilling map output
    2016-10-05 22:23:22,959 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1476)) - bufstart = 0; bufend = 133408; bufvoid = 104857600
    2016-10-05 22:23:22,960 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1478)) - kvstart = 26214396(104857584); kvend = 26159356(104637424); length = 55041/6553600
    2016-10-05 22:23:23,217 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1660)) - Finished spill 0
    2016-10-05 22:23:23,224 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1001)) - Task:attempt_local1665328582_0001_m_000000_0 is done. And is in the process of committing
    2016-10-05 22:23:23,236 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
    2016-10-05 22:23:23,237 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1665328582_0001_m_000000_0' done.
    2016-10-05 22:23:23,237 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local1665328582_0001_m_000000_0
    2016-10-05 22:23:23,237 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1665328582_0001_m_000001_0
    2016-10-05 22:23:23,239 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
    2016-10-05 22:23:23,333 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@11eaf203
    2016-10-05 22:23:23,336 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(753)) - Processing split: file:/D:/hadoop/wordCountInput/mapred-default.txt:0+71935
    2016-10-05 22:23:23,368 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1367)) - Job job_local1665328582_0001 running in uber mode : false
    2016-10-05 22:23:23,371 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1374)) -  map 100% reduce 0%
    2016-10-05 22:23:23,371 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1202)) - (EQUATOR) 0 kvi 26214396(104857584)
    2016-10-05 22:23:23,371 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(995)) - mapreduce.task.io.sort.mb: 100
    2016-10-05 22:23:23,371 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(996)) - soft limit at 83886080
    2016-10-05 22:23:23,372 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(997)) - bufstart = 0; bufvoid = 104857600
    2016-10-05 22:23:23,372 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(998)) - kvstart = 26214396; length = 6553600
    2016-10-05 22:23:23,374 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(402)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    2016-10-05 22:23:23,433 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
    2016-10-05 22:23:23,433 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1457)) - Starting flush of map output
    2016-10-05 22:23:23,433 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1475)) - Spilling map output
    2016-10-05 22:23:23,433 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1476)) - bufstart = 0; bufend = 115022; bufvoid = 104857600
    2016-10-05 22:23:23,434 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1478)) - kvstart = 26214396(104857584); kvend = 26169196(104676784); length = 45201/6553600
    2016-10-05 22:23:23,482 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1660)) - Finished spill 0
    2016-10-05 22:23:23,490 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1001)) - Task:attempt_local1665328582_0001_m_000001_0 is done. And is in the process of committing
    2016-10-05 22:23:23,496 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
    2016-10-05 22:23:23,496 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1665328582_0001_m_000001_0' done.
    2016-10-05 22:23:23,496 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local1665328582_0001_m_000001_0
    2016-10-05 22:23:23,496 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1665328582_0001_m_000002_0
    2016-10-05 22:23:23,499 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
    2016-10-05 22:23:23,592 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@d86fc54
    2016-10-05 22:23:23,594 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(753)) - Processing split: file:/D:/hadoop/wordCountInput/yarn-default.txt:0+61727
    2016-10-05 22:23:23,630 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1202)) - (EQUATOR) 0 kvi 26214396(104857584)
    2016-10-05 22:23:23,630 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(995)) - mapreduce.task.io.sort.mb: 100
    2016-10-05 22:23:23,630 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(996)) - soft limit at 83886080
    2016-10-05 22:23:23,630 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(997)) - bufstart = 0; bufvoid = 104857600
    2016-10-05 22:23:23,631 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(998)) - kvstart = 26214396; length = 6553600

    2016-10-05 22:23:23,632 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(402)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    2016-10-05 22:23:23,667 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
    2016-10-05 22:23:23,667 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1457)) - Starting flush of map output
    2016-10-05 22:23:23,667 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1475)) - Spilling map output
    2016-10-05 22:23:23,667 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1476)) - bufstart = 0; bufend = 102541; bufvoid = 104857600
    2016-10-05 22:23:23,667 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1478)) - kvstart = 26214396(104857584); kvend = 26171956(104687824); length = 42441/6553600
    2016-10-05 22:23:23,720 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1660)) - Finished spill 0
    2016-10-05 22:23:23,728 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1001)) - Task:attempt_local1665328582_0001_m_000002_0 is done. And is in the process of committing
    2016-10-05 22:23:23,733 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
    2016-10-05 22:23:23,733 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1665328582_0001_m_000002_0' done.
    2016-10-05 22:23:23,733 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local1665328582_0001_m_000002_0
    2016-10-05 22:23:23,733 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1665328582_0001_m_000003_0
    2016-10-05 22:23:23,735 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
    2016-10-05 22:23:23,881 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@2320356e
    2016-10-05 22:23:23,884 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(753)) - Processing split: file:/D:/hadoop/wordCountInput/core-default.txt:0+56307
    2016-10-05 22:23:23,941 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1202)) - (EQUATOR) 0 kvi 26214396(104857584)
    2016-10-05 22:23:23,941 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(995)) - mapreduce.task.io.sort.mb: 100
    2016-10-05 22:23:23,941 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(996)) - soft limit at 83886080
    2016-10-05 22:23:23,941 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(997)) - bufstart = 0; bufvoid = 104857600
    2016-10-05 22:23:23,941 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(998)) - kvstart = 26214396; length = 6553600
    2016-10-05 22:23:23,942 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(402)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    2016-10-05 22:23:23,982 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
    2016-10-05 22:23:23,982 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1457)) - Starting flush of map output
    2016-10-05 22:23:23,982 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1475)) - Spilling map output
    2016-10-05 22:23:23,982 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1476)) - bufstart = 0; bufend = 88297; bufvoid = 104857600
    2016-10-05 22:23:23,983 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1478)) - kvstart = 26214396(104857584); kvend = 26180668(104722672); length = 33729/6553600
    2016-10-05 22:23:24,042 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1660)) - Finished spill 0
    2016-10-05 22:23:24,049 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1001)) - Task:attempt_local1665328582_0001_m_000003_0 is done. And is in the process of committing
    2016-10-05 22:23:24,054 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
    2016-10-05 22:23:24,054 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1665328582_0001_m_000003_0' done.
    2016-10-05 22:23:24,054 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local1665328582_0001_m_000003_0
    2016-10-05 22:23:24,055 INFO  [Thread-8] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
    2016-10-05 22:23:24,060 INFO  [Thread-8] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
    2016-10-05 22:23:24,060 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local1665328582_0001_r_000000_0
    2016-10-05 22:23:24,079 INFO  [pool-3-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
    2016-10-05 22:23:24,206 INFO  [pool-3-thread-1] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@6df0e581
    2016-10-05 22:23:24,212 INFO  [pool-3-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@31aba945
    2016-10-05 22:23:24,233 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(197)) - MergerManager: memoryLimit=1988689920, maxSingleShuffleLimit=497172480, mergeThreshold=1312535424, ioSortFactor=10, memToMemMergeOutputsThreshold=10
    2016-10-05 22:23:24,236 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local1665328582_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
    2016-10-05 22:23:24,326 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(141)) - localfetcher#1 about to shuffle output of map attempt_local1665328582_0001_m_000000_0 decomp: 160932 len: 160936 to MEMORY
    2016-10-05 22:23:24,337 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 160932 bytes from map-output for attempt_local1665328582_0001_m_000000_0
    2016-10-05 22:23:24,339 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(315)) - closeInMemoryFile -> map-output of size: 160932, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->160932
    2016-10-05 22:23:24,346 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(141)) - localfetcher#1 about to shuffle output of map attempt_local1665328582_0001_m_000003_0 decomp: 105166 len: 105170 to MEMORY
    2016-10-05 22:23:24,349 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 105166 bytes from map-output for attempt_local1665328582_0001_m_000003_0
    2016-10-05 22:23:24,350 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(315)) - closeInMemoryFile -> map-output of size: 105166, inMemoryMapOutputs.size() -> 2, commitMemory -> 160932, usedMemory ->266098
    2016-10-05 22:23:24,356 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(141)) - localfetcher#1 about to shuffle output of map attempt_local1665328582_0001_m_000002_0 decomp: 123765 len: 123769 to MEMORY
    2016-10-05 22:23:24,358 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 123765 bytes from map-output for attempt_local1665328582_0001_m_000002_0
    2016-10-05 22:23:24,358 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(315)) - closeInMemoryFile -> map-output of size: 123765, inMemoryMapOutputs.size() -> 3, commitMemory -> 266098, usedMemory ->389863
    2016-10-05 22:23:24,364 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(141)) - localfetcher#1 about to shuffle output of map attempt_local1665328582_0001_m_000001_0 decomp: 137626 len: 137630 to MEMORY
    2016-10-05 22:23:24,367 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 137626 bytes from map-output for attempt_local1665328582_0001_m_000001_0
    2016-10-05 22:23:24,367 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(315)) - closeInMemoryFile -> map-output of size: 137626, inMemoryMapOutputs.size() -> 4, commitMemory -> 389863, usedMemory ->527489
    2016-10-05 22:23:24,367 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
    2016-10-05 22:23:24,368 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 4 / 4 copied.
    2016-10-05 22:23:24,369 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(687)) - finalMerge called with 4 in-memory map-outputs and 0 on-disk map-outputs
    2016-10-05 22:23:24,384 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(597)) - Merging 4 sorted segments
    2016-10-05 22:23:24,384 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(696)) - Down to the last merge-pass, with 4 segments left of total size: 527477 bytes
    2016-10-05 22:23:24,886 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(754)) - Merged 4 segments, 527489 bytes to disk to satisfy reduce memory limit
    2016-10-05 22:23:24,887 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(784)) - Merging 1 files, 527487 bytes from disk
    2016-10-05 22:23:24,888 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(799)) - Merging 0 segments, 0 bytes from memory into reduce
    2016-10-05 22:23:24,888 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(597)) - Merging 1 sorted segments
    2016-10-05 22:23:24,889 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(696)) - Down to the last merge-pass, with 1 segments left of total size: 527480 bytes
    2016-10-05 22:23:24,889 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 4 / 4 copied.
    2016-10-05 22:23:24,918 INFO  [pool-3-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1129)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
    2016-10-05 22:23:25,298 INFO  [pool-3-thread-1] mapred.Task (Task.java:done(1001)) - Task:attempt_local1665328582_0001_r_000000_0 is done. And is in the process of committing
    2016-10-05 22:23:25,302 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 4 / 4 copied.
    2016-10-05 22:23:25,302 INFO  [pool-3-thread-1] mapred.Task (Task.java:commit(1162)) - Task attempt_local1665328582_0001_r_000000_0 is allowed to commit now
    2016-10-05 22:23:25,306 INFO  [pool-3-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local1665328582_0001_r_000000_0' to file:/D:/hadoop/wordCountOuput/_temporary/0/task_local1665328582_0001_r_000000
    2016-10-05 22:23:25,308 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce
    2016-10-05 22:23:25,308 INFO  [pool-3-thread-1] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1665328582_0001_r_000000_0' done.
    2016-10-05 22:23:25,308 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local1665328582_0001_r_000000_0
    2016-10-05 22:23:25,308 INFO  [Thread-8] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete.
    2016-10-05 22:23:25,374 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1374)) -  map 100% reduce 100%
    2016-10-05 22:23:25,374 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1385)) - Job job_local1665328582_0001 completed successfully
    2016-10-05 22:23:25,399 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1392)) - Counters: 33
    File System Counters
    FILE: Number of bytes read=2051066
    FILE: Number of bytes written=3906069
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    Map-Reduce Framework
    Map input records=7442
    Map output records=44106
    Map output bytes=439268
    Map output materialized bytes=527505
    Input split bytes=450
    Combine input records=0
    Combine output records=0
    Reduce input groups=5286
    Reduce shuffle bytes=527505
    Reduce input records=44106
    Reduce output records=5286
    Spilled Records=88212
    Shuffled Maps =4
    Failed Shuffles=0
    Merged Map outputs=4
    GC time elapsed (ms)=22
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=1901461504
    Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
    File Input Format Counters 
    Bytes Read=270651
    File Output Format Counters 
    Bytes Written=107107



    For the meaning of these counter fields, see: http://blog.csdn.net/xfg0218/article/details/52740327


  • After writing RPC protocol code and trying to run it on Windows, some people hit an error like: Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\bin\winutils....

    After we have written the RPC protocol code and try to run it on Windows, some people get an error like: Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    This happens because running Hadoop programs on Windows requires two extra files, winutils.exe and hadoop.dll, and after putting them in place the environment variables must be configured as well.

    Symptoms:
    (1) Missing winutils.exe produces:
    Could not locate executable null\bin\winutils.exe in the hadoop binaries
    (2) Missing hadoop.dll produces:
    Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

    Fix:
    (1) Download the files:
    search for winutils.exe on https://github.com/ and download it
    (2) Configure the environment variables:
    ( * ) Put the two downloaded files into the bin directory under the Hadoop path and into C:\Windows\System32, respectively;

    ( * ) Right-click This PC -> Properties -> Advanced system settings -> Environment Variables -> under System variables find Path -> Edit -> New -> paste in the path of Hadoop's bin directory -> OK;
    ( * ) Right-click This PC -> Properties -> Advanced system settings -> Environment Variables -> New -> variable name: HADOOP_HOME, variable value: the Hadoop extraction path.
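
    After editing the variables, a new shell (or an IDE restart) is usually needed before they become visible. A quick way to confirm what the JVM actually sees, sketch only:

    // Sanity check: print the environment the program actually runs with.
    public class EnvCheck {
        public static void main(String[] args) {
            System.out.println("HADOOP_HOME = " + System.getenv("HADOOP_HOME"));
            String path = System.getenv("PATH");
            System.out.println("PATH mentions hadoop: "
                    + (path != null && path.toLowerCase().contains("hadoop")));
        }
    }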

  • Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0...
  • Connecting to a Hadoop cluster from Eclipse on Windows and running wordcount locally fails with the error below; the fixes found online did not help (switching to a 64-bit JDK, adding hadoop_home and path, and putting hadoop.dll in hadoop\bin and c:\windows\system32). Solution: delete hadoop\bin\hadoop....
  • Running Hadoop on Windows

    2016-06-28 18:06:00
    There are usually two ways to run Hadoop on Windows: install a Linux OS in a VM, which gives essentially a full Linux environment for Hadoop; or emulate Linux with Cygwin. The advantage of the latter is that it is convenient to use and simple to install. Here we...
  • Hadoop for Windows

    2018-08-26 14:34:43
    This archive is the official package, modified so that the configuration files and some plugins needed to run Hadoop on Windows are already included; you only need to install a 64-bit Java environment and edit the hadoop-env.cmd configuration.
  • 2. Running Hadoop locally on Windows needs support from the local file system; download and configure  https://github.com/steveloughran/winutils 3. After unpacking, copy all files from that bin directory into hadoop/bin; identical files can be left un-overwritten 4. ...
  • Running hadoop 2.7.2 on Windows

    2016-12-24 22:29:00
    1. Download hadoop-2.7.2.tar.gz 2. Unpack it to D:\hadoop\ 3. Set the HADOOP_HOME environment variable 4. Add %HADOOP_HOME%\bin to the path environment variable 5. Set JAVA_HOME; note the path must not contain spaces 6. Download the hadoop-common-bin toolkit (note...
  • hadoop.dll and winutils.exe needed to run Hadoop under Windows XP SP3
  • How to run hadoop-2.6 on Windows
  • Download hadooponwindows-master.zip (a toolkit that makes Hadoop runnable on Windows). Installation and configuration: this article covers it well, configured in just a few steps: "Installing and configuring Hadoop on Windows". Troubleshooting: Win7 complains "may need to run as administrator" when unpacking? ...
  • Running Hadoop MapReduce programs locally on Windows

    2018-11-13 10:51:31
    1. Download Hadoop and install it locally on Windows. Address: https://archive.apache.org/dist/hadoop/core/hadoop-2.6.0/hadoop-2.6.0.tar.gz 2. After unpacking, set the environment variables: new variable HADOOP_HOME = D:\software\hadoop-2.6.0, and Path...
  • Copy winutils.exe into Hadoop's bin directory on Windows. Copy hadoop.dll and winutils.exe into the C:\Windows\System32 directory. Set the HADOOP_HOME environment variable to .../hadoop2.XXX/. Open eclipse/idea and import the jars (mapreduce, yarn, hdfs, common)...
  • Hadoop runtime environment for Windows

    2018-08-07 17:09:42
    Hadoop Windows runtime driver package that lets Hadoop run on Windows systems
  • A very detailed walkthrough of running the Hadoop wordcount example on Windows. Special thanks to the two bloggers referenced in the post. 1. Download two files: https://github.com/MuhammadBilalYar/HADOOP-INSTALLATION-ON-WINDOW-10/blob/master/MapReduceClient.jar ...
  • Reference: "Installing Hadoop on Windows for beginners". 1. Download the JDK and set JAVA_HOME. (Use a path without spaces; something like Program Files will cause errors in later configuration!) Suppose JAVA_HOME is C:\Java\jdk1.8.0_73. The download used here is the 2.8.3 binary release...
  • How to run Hadoop MapReduce programs on Windows: 1. Download the Hadoop package, here "hadoop-2.6.4.tar.gz"; 2. Unpack it directly into the root of drive D:; 3. Configure the environment variables; 4. Download the Hadoop Eclipse plugin and...
  • Running Hadoop on Windows

    2011-05-05 22:36:47
    Quoted from ... Hadoop environments are mostly Linux-based: Hadoop's development and documentation target Linux, and Hadoop does not recommend Windows as a production environment. The author therefore argues that letting the vast majority of ordinary...
  • Fixing errors when running a Hadoop client on Windows

    2017-06-14 10:05:40
    1. Accessing HDFS reports a permission error: Permission denied: user=administrator, access=WRITE, inode="/":root:supergroup. Fix: configure the JVM run argument -DHADOOP_USER... (sketch below) 2. Could not locate Hadoop executable: E:\lib\hadoop-2.8.0
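
    The truncated flag in that excerpt is presumably -DHADOOP_USER_NAME, which Hadoop reads either as a JVM system property or as an environment variable, so the same fix can also be applied in code before any FileSystem call (the user name root comes from the quoted error; adjust it to your cluster):

    // Equivalent to launching with -DHADOOP_USER_NAME=root: make the client
    // identify itself as the HDFS superuser instead of the local Windows user.
    System.setProperty("HADOOP_USER_NAME", "root");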
  • Installing and running Hadoop on Windows

    2018-05-25 23:36:00
    Reference: "Installing Hadoop on Windows for beginners" 1. Download the JDK and set JAVA_HOME. (Use a path without spaces; something like Program Files will cause errors in later configuration!) Suppose JAVA_HOME is C:\Java\jdk1.8.0_73 2. Download ha...
  • Solution: Problems running Hadoop on... Hadoop requires native libraries on Windows to work properly; that includes access to the file:// filesystem, where Hadoop uses some Windows APIs to implement ...
