Hadoop version:

$ hadoop version
Hadoop 0.20.2-cdh3u4
Subversion git://ubuntu-slave01/var/lib/jenkins/workspace/CDH3u4-Full-RC/build/cdh3/hadoop20/0.20.2-cdh3u4/source -r 214dd731e3bdb687cb55988d3f47dd9e248c5690
Compiled by jenkins on Mon May 7 13:01:39 PDT 2012
From source with checksum a60c9795e41a3248b212344fb131c12c
The exact code differs slightly depending on the versions in use; this post uses the following MRUnit dependency:

<dependency>
  <groupId>org.apache.mrunit</groupId>
  <artifactId>mrunit</artifactId>
  <version>1.0.0</version>
  <classifier>hadoop1</classifier>
</dependency>
The commonly used classes are:

org.apache.hadoop.mrunit.mapreduce.MapDriver;
org.apache.hadoop.mrunit.mapreduce.MapReduceDriver;
org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
The mapper, combiner, and reducer under test behave as follows:

CompMapper: turns an input record such as 222-333##id1##id2 into the key id1##id2 with the value 1L (one occurrence).
CompCombiner: sums the values for each key.
CompReducer: sums the long values for each key (e.g. id1##id2), divides the sum by a fixed constant, and emits the result as a double.
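CompMapper's source is not shown in this post, so the key-extraction step can only be inferred from the input/output description above. A minimal plain-Java sketch of that assumed logic (the class and method names here are made up for illustration):

```java
public class KeyExtractor {
    // Assumed logic: drop the leading segment (e.g. "222-333") and keep the
    // rest ("id1##id2") as the key. The "##" separator comes from the sample
    // record in the post.
    public static String extractKey(String record) {
        int first = record.indexOf("##");
        return record.substring(first + 2);
    }

    public static void main(String[] args) {
        System.out.println(extractKey("222-333##id1##id2")); // prints "id1##id2"
    }
}
```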
The test code for the mapper, combiner, and reducer follows:
import java.io.IOException;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Before;
import org.junit.Test;

private MapDriver<Text, LongWritable, Text, LongWritable> mapDriver;
private ReduceDriver<Text, LongWritable, Text, DoubleWritable> reduceDriver;
private ReduceDriver<Text, LongWritable, Text, LongWritable> combinerDriver;
private MapReduceDriver<Text, LongWritable, Text, LongWritable, Text, LongWritable> mapCombinerDriver;
private MapReduceDriver<Text, LongWritable, Text, LongWritable, Text, DoubleWritable> mapReducerDriver;

@Before
public void setUp() {
    CompMapper mapper = new CompMapper();
    CompCombiner combiner = new CompCombiner();
    CompReducer reducer = new CompReducer();
    mapDriver = new MapDriver<Text, LongWritable, Text, LongWritable>(mapper);
    reduceDriver = new ReduceDriver<Text, LongWritable, Text, DoubleWritable>(reducer);
    combinerDriver = new ReduceDriver<Text, LongWritable, Text, LongWritable>(combiner);
    mapCombinerDriver = new MapReduceDriver<Text, LongWritable, Text, LongWritable, Text, LongWritable>(mapper, combiner);
    mapReducerDriver = new MapReduceDriver<Text, LongWritable, Text, LongWritable, Text, DoubleWritable>(mapper, reducer);
}

@Test
public void testMapper() throws IOException {
    mapDriver.setInput(new Text("222-333##id1##id2"), new LongWritable(1L));
    mapDriver.withOutput(new Text("id1##id2"), new LongWritable(1L));
    mapDriver.runTest();
}

@Test
public void testCombiner() throws IOException {
    List<LongWritable> values = new ArrayList<LongWritable>();
    for (int i = 0; i < 5; i++) {
        values.add(new LongWritable(i));
    }
    combinerDriver.addInput(new Text("id1##id2"), values);
    // 0 + 1 + 2 + 3 + 4 = 10
    combinerDriver.withOutput(new Text("id1##id2"), new LongWritable(10L));
    combinerDriver.runTest();
}

@Test
public void testReducer() throws IOException {
    List<LongWritable> values = new ArrayList<LongWritable>();
    long count = 0;
    for (int i = 0; i < 5; i++) {
        count += i;
        values.add(new LongWritable(i));
    }
    reduceDriver.addInput(new Text("id1##id2"), values);
    // MinhashOptionCreator.NUM_HASH_FUNCTIONS is a config-key constant
    // defined outside this test class.
    int numHash = reduceDriver.getConfiguration().getInt(
            MinhashOptionCreator.NUM_HASH_FUNCTIONS, 10);
    DoubleWritable dw = new DoubleWritable();
    BigDecimal b1 = new BigDecimal(count);
    BigDecimal b2 = new BigDecimal(numHash);
    dw.set(b1.divide(b2).doubleValue());
    reduceDriver.withOutput(new Text("id1##id2"), dw);
    reduceDriver.runTest();
}

@Test
public void testMapCombiner() throws IOException {
    mapCombinerDriver.addInput(new Text("222-333##id1##id2"), new LongWritable(1L));
    mapCombinerDriver.addInput(new Text("111-333##id1##id2"), new LongWritable(1L));
    mapCombinerDriver.withOutput(new Text("id1##id2"), new LongWritable(2L));
    mapCombinerDriver.runTest();
}

@Test
public void testMapReducer() throws IOException {
    mapReducerDriver.addInput(new Text("222-333##id1##id2"), new LongWritable(1L));
    mapReducerDriver.addInput(new Text("111-333##id1##id2"), new LongWritable(1L));
    // Read the configuration from mapReducerDriver, not reduceDriver.
    int numHash = mapReducerDriver.getConfiguration().getInt("NUM", 10);
    DoubleWritable dw = new DoubleWritable();
    BigDecimal b1 = new BigDecimal(2L);
    BigDecimal b2 = new BigDecimal(numHash);
    dw.set(b1.divide(b2).doubleValue());
    mapReducerDriver.withOutput(new Text("id1##id2"), dw);
    mapReducerDriver.runTest();
}
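The reducer tests compute the expected value as sum / numHash via BigDecimal. A standalone sketch of that arithmetic (class and method names here are illustrative; the explicit scale and rounding mode are an addition, since BigDecimal.divide without them throws ArithmeticException for non-terminating quotients such as 1/3):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class AverageCalc {
    // Mirrors the reducer's output calculation: sum divided by the number
    // of hash functions, returned as a double.
    public static double ratio(long sum, int numHash) {
        return new BigDecimal(sum)
                .divide(new BigDecimal(numHash), 10, RoundingMode.HALF_UP)
                .doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(ratio(10L, 10)); // prints 1.0
        System.out.println(ratio(2L, 10));  // prints 0.2
    }
}
```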
Notes:

1. MRUnit artifacts are built against specific Hadoop versions; the two must match.
2. A java.lang.IncompatibleClassChangeError at runtime almost always indicates such a version mismatch.