
A Hadoop WordCount Example Using the New API


Preparation

Prepare some input files; they can be uploaded with hdfs dfs -put xxx/* /user/fatkun/input.
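A couple of small text files are enough to exercise the job. A minimal sketch, assuming xxx is a local directory (the file names and contents below are only for illustration):

echo "Hello Hadoop hello world" > xxx/file1.txt
echo "Hadoop counts words" > xxx/file2.txt
hdfs dfs -mkdir -p /user/fatkun/input
hdfs dfs -put xxx/* /user/fatkun/input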

Code
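The listing below uses the new org.apache.hadoop.mapreduce API throughout: the mapper reads a wordcount.case.sensitive switch from the job configuration in setup(), tokenizes each line in map() and increments a custom counter per word, and the reducer, which also doubles as the combiner, sums the per-word counts.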

package com.fatkun;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {
    static enum Counters {
        INPUT_WORDS // custom counter: total number of words seen by the mappers
    }

    static Log logger = LogFactory.getLog(WordCount.class);

    public static class CountMapper extends
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final IntWritable one = new IntWritable(1);
        private Text word = new Text();
        private boolean caseSensitive = true;

        @Override
        protected void setup(Context context) throws IOException,
                InterruptedException {
            // Read the case-sensitivity switch from the job configuration,
            // once per mapper
            Configuration conf = context.getConfiguration();
            caseSensitive = conf.getBoolean("wordcount.case.sensitive", true);
            super.setup(context);
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                if (caseSensitive) { // honor the case-sensitivity switch
                    word.set(itr.nextToken());
                } else {
                    word.set(itr.nextToken().toLowerCase());
                }
                context.write(word, one);
                context.getCounter(Counters.INPUT_WORDS).increment(1);
            }
        }
    }

    public static class CountReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text text, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(text, new IntWritable(sum));
        }

    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = new Configuration(getConf());
        Job job = Job.getInstance(conf, "Example Hadoop WordCount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(CountMapper.class);
        job.setCombinerClass(CountReducer.class); // summing is associative, so the reducer doubles as combiner
        job.setReducerClass(CountReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        List<String> other_args = new ArrayList<String>();
        for (int i = 0; i < args.length; ++i) {
            other_args.add(args[i]);
        }

        FileInputFormat.addInputPath(job, new Path(other_args.get(0)));
        FileOutputFormat.setOutputPath(job, new Path(other_args.get(1)));
        int ret = job.waitForCompletion(true) ? 0 : 1;

        // Read the custom counter back from the finished job
        long inputWord = job.getCounters().findCounter(Counters.INPUT_WORDS)
                .getValue();
        System.out.println("INPUT_WORDS:" + inputWord);
        logger.info("test log: " + inputWord);
        return ret;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);

        System.exit(res);
    }

}
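Before submitting to a cluster, the job can also be smoke-tested in-process with Hadoop's local job runner. A minimal sketch, assuming a Hadoop 2.x client on the classpath (the runner class and the target/... paths are hypothetical, not part of the original example):

package com.fatkun;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;

public class WordCountLocalRunner {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "file:///");          // read/write the local filesystem
        conf.set("mapreduce.framework.name", "local"); // run map and reduce in this JVM
        // Note: the output directory must not exist yet, or the job fails fast.
        int res = ToolRunner.run(conf, new WordCount(),
                new String[] { "target/test-input", "target/test-output" });
        System.exit(res);
    }
}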

Run

Export a jar from Eclipse, then run the following command:

hadoop jar wordcount.jar com.fatkun.WordCount -Dwordcount.case.sensitive=false /user/fatkun/input /user/fatkun/output
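Because the driver runs through ToolRunner, GenericOptionsParser strips the -Dwordcount.case.sensitive=false option out of args and merges it into the Configuration returned by getConf(), so run() only sees the two path arguments. Once the job completes, the result can be inspected with, for example:

hdfs dfs -cat /user/fatkun/output/part-r-00000

(In the new API, part-r-00000 is the default name of the first reducer's output file.)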

References

http://cxwangyi.blogspot.com/2009/12/wordcount-tutorial-for-hadoop-0201.html

http://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html#Example%3A+WordCount+v2.0
