[TOC]

# Custom Functions

When Hive's built-in functions cannot meet your business needs, you can turn to user-defined functions (`UDF: user-defined function`).

# Categories of Custom Functions

* UDF: one row in, one row out. Operates on a single data row and produces a single data row as output (e.g. mathematical and string functions).
* UDAF (user-defined aggregate function): many rows in, one row out. Accepts multiple input rows and produces a single output row, similar to the built-in aggregates count/max/min.
* UDTF (user-defined table-generating function): one row in, many rows out.

# UDF Development Example

## Development Steps

1. Extend org.apache.hadoop.hive.ql.exec.UDF.
2. Implement the evaluate method; evaluate supports overloading.
3. Create the function in the Hive CLI:
    a. Add the jar: `add jar linux_jar_path`
    b. Create the function: `create [temporary] function [dbname.]function_name AS 'class_name';`
4. Drop the function in the Hive CLI: `drop [temporary] function [if exists] [dbname.]function_name;`

## A Simple UDF Example

1. Write a Java class that extends UDF and overloads the evaluate method:
~~~
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class Lower extends UDF {
    public Text evaluate(final Text s) {
        if (s == null) { return null; }
        return new Text(s.toString().toLowerCase());
    }
}
~~~
2. Package the class into a jar and upload it to the server.
3. Add the jar to Hive's classpath:
~~~
hive> add JAR /home/hadoop/udf.jar;
~~~
4. Create a temporary function associated with the Java class:
~~~
hive> create temporary function tolowercase as 'cn.bigdata.Lower';
~~~
5. You can now use the custom function tolowercase in HQL:
~~~
select tolowercase(name), age from t_test;
~~~

**A custom function created this way is only valid for the current session.**

## Developing a JSON-Parsing UDF

Given raw JSON data like this:
~~~
{"movie":"1193","rate":"5","timeStamp":"978300760","uid":"1"}
{"movie":"661","rate":"3","timeStamp":"978302109","uid":"1"}
{"movie":"914","rate":"3","timeStamp":"978301968","uid":"1"}
{"movie":"3408","rate":"4","timeStamp":"978300275","uid":"1"}
{"movie":"2355","rate":"5","timeStamp":"978824291","uid":"1"}
{"movie":"1197","rate":"3","timeStamp":"978302268","uid":"1"}
{"movie":"1287","rate":"5","timeStamp":"978302039","uid":"1"}
~~~
Load the data into the Hive warehouse:
~~~
create table rat_json(line string) row format delimited;
~~~
~~~
load data local inpath '/root/rating.json' into table rat_json;
~~~
No matter how many intermediate tables you use along the way, the final result table should look like this:
~~~
movie	rate	timestamp	uid
1197	3	978302268	1
~~~
Note: everything must be done inside Hive; a custom function may be used.
~~~
//{"movie":"1721","rate":"3","timeStamp":"965440048","uid":"5114"}
public class MovieRateBean {
    private String movie;
    private String rate;
    private String timeStamp;
    private String uid;

    public String getMovie() { return movie; }
    public void setMovie(String movie) { this.movie = movie; }
    public String getRate() { return rate; }
    public void setRate(String rate) { this.rate = rate; }
    public String getTimeStamp() { return timeStamp; }
    public void setTimeStamp(String timeStamp) { this.timeStamp = timeStamp; }
    public String getUid() { return uid; }
    public void setUid(String uid) { this.uid = uid; }

    @Override
    public String toString() {
        return movie + "\t" + rate + "\t" + timeStamp + "\t" + uid;
    }
}
~~~
~~~
import org.apache.hadoop.hive.ql.exec.UDF;
// Jackson, as shaded inside the parquet jar that ships with Hive
import parquet.org.codehaus.jackson.map.ObjectMapper;

public class JsonParser extends UDF {

    public String evaluate(String jsonLine) {
        // ObjectMapper can parse the JSON line into the bean for us
        ObjectMapper objectMapper = new ObjectMapper();
        try {
            MovieRateBean bean = objectMapper.readValue(jsonLine, MovieRateBean.class);
            return bean.toString();
        } catch (Exception e) {
            // malformed line: fall through and return an empty string
        }
        return "";
    }
}
~~~
Package this function into a jar, upload it, and add the jar:
~~~
add JAR /root/hiveStudy/jsonParser.jar;
~~~
Create a temporary function:
~~~
create temporary function parsejson as 'com.hive.JsonParser';
~~~
~~~
select parsejson(line) from rat_json limit 10;
~~~
If you see the following error, the function was written incorrectly or the jar was packaged incorrectly (e.g. the class name given in `create temporary function` does not match the actual class):
~~~
SemanticException [Error 10014]: Line 1:7 Wrong arguments 'line': No matching method for class com.hive.ParseJson with (string). Possible choices:
~~~

## Built-in JSON Parsing: `get_json_object`

~~~
hive> select * from rat_json limit 10;
OK
{"movie":"1193","rate":"5","timeStamp":"978300760","uid":"1"}
{"movie":"661","rate":"3","timeStamp":"978302109","uid":"1"}
{"movie":"914","rate":"3","timeStamp":"978301968","uid":"1"}
{"movie":"3408","rate":"4","timeStamp":"978300275","uid":"1"}
{"movie":"2355","rate":"5","timeStamp":"978824291","uid":"1"}
{"movie":"1197","rate":"3","timeStamp":"978302268","uid":"1"}
{"movie":"1287","rate":"5","timeStamp":"978302039","uid":"1"}
{"movie":"2804","rate":"5","timeStamp":"978300719","uid":"1"}
{"movie":"594","rate":"4","timeStamp":"978302268","uid":"1"}
{"movie":"919","rate":"4","timeStamp":"978301368","uid":"1"}
Time taken: 0.131 seconds, Fetched: 10 row(s)
hive> select get_json_object(line,'$.movie') as movie, get_json_object(line,'$.rate') as rate from rat_json limit 10;
OK
1193	5
661	3
914	3
3408	4
2355	5
1197	3
1287	5
2804	5
594	4
919	4
Time taken: 0.108 seconds, Fetched: 10 row(s)
~~~

# Function Scope

## Temporary Functions

1. A custom UDF extends org.apache.hadoop.hive.ql.exec.UDF.
2. Implement the evaluate method; evaluate supports overloading. (Note: a UDF created in one database cannot be used from another database.)
3. Package the program and copy it to the target machine.
4. In the Hive client, add the jar: `hive> add jar /run/jar/udf_test.jar;`
5. Create the temporary function: `hive> create temporary function add_example AS 'hive.udf.Add';` Without temporary, the function is permanent.
6. Query with HQL:
~~~
SELECT add_example(8, 9) FROM scores;
SELECT add_example(scores.math, scores.art) FROM scores;
SELECT add_example(6, 7, 8, 6.8) FROM scores;
~~~
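The queries above call add_example with two int literals, two column values, and four mixed int/double arguments, which implies an evaluate method overloaded for each shape. The source of `hive.udf.Add` is not shown in this article; the following is a minimal hypothetical sketch of what it might look like (the extends clause is commented out only so the snippet compiles without Hive on the classpath):

```java
// Hypothetical sketch of hive.udf.Add (not shown in the original article).
// In a real UDF this class would extend org.apache.hadoop.hive.ql.exec.UDF.
public class Add /* extends UDF */ {

    // Matches add_example(8, 9) and add_example(scores.math, scores.art)
    public Integer evaluate(Integer a, Integer b) {
        if (a == null || b == null) { return null; }
        return a + b;
    }

    // Matches add_example(6, 7, 8, 6.8): int literals are promoted to double
    public Double evaluate(Double a, Double b, Double c, Double d) {
        if (a == null || b == null || c == null || d == null) { return null; }
        return a + b + c + d;
    }
}
```

At query time, Hive uses reflection on the argument types to pick which evaluate overload to invoke.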
When you are done, drop the temporary function:
~~~
hive> DROP TEMPORARY FUNCTION add_example;
~~~
Note: **a UDF can only implement one-in-one-out operations; to implement many-in-one-out, you need a UDAF.**

## Permanent Functions

1. The custom UDF extends org.apache.hadoop.hive.ql.exec.UDF. (Note: the class's package must be org.apache.hadoop.hive.ql.udf.)
2. Implement the evaluate method; evaluate supports overloading.
3. Modify the FunctionRegistry class to register the new UDF.
4. Compile the UDF into a class file and place it inside hive-exec-0.12.0-cdh5.0.0.jar under the org\apache\hadoop\hive\ql\udf path.
5. Replace the FunctionRegistry class in hive-exec-0.12.0-cdh5.0.0.jar under org.apache.hadoop.hive.ql.exec with the newly compiled one.

# Viewing Functions

`show functions` will list the function you created:
~~~
hive> show functions;
~~~
View the custom function's description:
~~~
hive> desc function cz;
~~~
If no description is shown, it is because none was defined when the function was written.

The source of Hive's built-in year function, found in the source jar:
![](https://box.kancloud.cn/a1bbe6e1f403e930f3b90d182a75d033_706x308.png)
When writing your own Hive function, you can follow this built-in function's example and define a description for it.
The year function's description inside Hive:
![](https://box.kancloud.cn/b31955fd2fb07af9446e2ad5668d8526_716x270.png)
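As the year source above suggests, Hive's built-in functions get their `desc function` output from a `@Description` annotation (`org.apache.hadoop.hive.ql.exec.Description`) on the UDF class. Below is a minimal sketch of annotating your own UDF this way; the annotation is stubbed locally here only so the snippet compiles without Hive on the classpath — in real code, import Hive's own annotation instead:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Local stand-in for org.apache.hadoop.hive.ql.exec.Description, stubbed so the
// sketch is self-contained; a real UDF would import the Hive annotation.
@Retention(RetentionPolicy.RUNTIME)
@interface Description {
    String name();
    String value();
    String extended() default "";
}

// `desc function` prints value(); `desc function extended` adds extended().
// _FUNC_ is replaced by the name the function was registered under.
@Description(name = "tolowercase",
        value = "_FUNC_(str) - Returns str with all characters in lower case",
        extended = "Example:\n  > SELECT tolowercase('Facebook');\n  'facebook'")
public class Lower {
    // A real UDF would extend org.apache.hadoop.hive.ql.exec.UDF
    public String evaluate(String s) {
        return s == null ? null : s.toLowerCase();
    }
}
```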