While using an internal RPC framework recently, I found that when a field is declared as Object but actually holds a BigDecimal, precision is lost in transit: an amount serialized as 1.00 comes back as 1.0 after deserialization. The numeric value itself may be unaffected, but in places that depend strictly on the amount's exact representation, this causes real problems.
Reading the source showed that the RPC framework uses Jackson as its default serialization library. Simple enough; let's see whether the problem can be reproduced locally.
public class Bean1 {
    private String p1;
    private BigDecimal p2;
    // ...getters/setters omitted...
}

public class Bean2 {
    private String p1;
    private Object p2;
    // ...getters/setters omitted...
}
To make the problem easier to see, two beans were prepared, identical except for the declared type of p2.
public class JKTest {
    public static void main(String[] args) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        Bean1 bean1 = new Bean1("haha1", new BigDecimal("1.00"));
        Bean2 bean2 = new Bean2("haha2", new BigDecimal("2.00"));
        String bs1 = mapper.writeValueAsString(bean1);
        String bs2 = mapper.writeValueAsString(bean2);
        System.out.println(bs1);
        System.out.println(bs2);
        Bean1 b1 = mapper.readValue(bs1, Bean1.class);
        System.out.println(b1.toString());
        Bean2 b22 = mapper.readValue(bs2, Bean2.class);
        System.out.println(b22.toString());
    }
}
Serialize and then deserialize Bean1 and Bean2 respectively, and inspect the results:
{"p1":"haha1","p2":1.00}
{"p1":"haha2","p2":2.00}
Bean1 [p1=haha1, p2=1.00]
Bean2 [p1=haha2, p2=2.0]
The output shows two things:
1. Serialization is correct for both beans;
2. The problem is reproduced: when Bean2 is deserialized, p2 loses its precision.
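The difference between 1.00 and 1.0 is a difference in BigDecimal scale, not in numeric value, which is exactly why code that compares amounts with equals() or signs the string form of an amount can break. A small stdlib-only sketch of the distinction:

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal exact = new BigDecimal("2.00");       // built from the string: scale 2
        BigDecimal viaDouble = BigDecimal.valueOf(2.00); // routed through a double: scale 1

        System.out.println(exact);                            // 2.00
        System.out.println(viaDouble);                        // 2.0
        System.out.println(exact.equals(viaDouble));          // false: equals() compares scale too
        System.out.println(exact.compareTo(viaDouble) == 0);  // true: numerically equal
    }
}
```

So a value that merely passes through a double keeps its magnitude but not its scale, and anything sensitive to the textual form of the amount sees a different value.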
Stepping through the Jackson source eventually leads to the Vanilla inner class of UntypedObjectDeserializer, whose deserialize method is as follows:
public Object deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
    switch (p.getCurrentTokenId()) {
    case JsonTokenId.ID_START_OBJECT:
        {
            JsonToken t = p.nextToken();
            if (t == JsonToken.END_OBJECT) {
                return new LinkedHashMap<String,Object>(2);
            }
        }
    case JsonTokenId.ID_FIELD_NAME:
        return mapObject(p, ctxt);
    case JsonTokenId.ID_START_ARRAY:
        {
            JsonToken t = p.nextToken();
            if (t == JsonToken.END_ARRAY) { // and empty one too
                if (ctxt.isEnabled(DeserializationFeature.USE_JAVA_ARRAY_FOR_JSON_ARRAY)) {
                    return NO_OBJECTS;
                }
                return new ArrayList<Object>(2);
            }
        }
        if (ctxt.isEnabled(DeserializationFeature.USE_JAVA_ARRAY_FOR_JSON_ARRAY)) {
            return mapArrayToArray(p, ctxt);
        }
        return mapArray(p, ctxt);
    case JsonTokenId.ID_EMBEDDED_OBJECT:
        return p.getEmbeddedObject();
    case JsonTokenId.ID_STRING:
        return p.getText();
    case JsonTokenId.ID_NUMBER_INT:
        if (ctxt.hasSomeOfFeatures(F_MASK_INT_COERCIONS)) {
            return _coerceIntegral(p, ctxt);
        }
        return p.getNumberValue(); // should be optimal, whatever it is
    case JsonTokenId.ID_NUMBER_FLOAT:
        if (ctxt.isEnabled(DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS)) {
            return p.getDecimalValue();
        }
        return p.getNumberValue();
    case JsonTokenId.ID_TRUE:
        return Boolean.TRUE;
    case JsonTokenId.ID_FALSE:
        return Boolean.FALSE;
    case JsonTokenId.ID_END_OBJECT:
        // 28-Oct-2015, tatu: [databind#989] We may also be given END_OBJECT (similar to FIELD_NAME),
        //    if caller has advanced to the first token of Object, but for empty Object
        return new LinkedHashMap<String,Object>(2);
    case JsonTokenId.ID_NULL: // 08-Nov-2016, tatu: yes, occurs
        return null;
    //case JsonTokenId.ID_END_ARRAY: // invalid
    default:
    }
    return ctxt.handleUnexpectedToken(Object.class, p);
}
Because p2 in Bean2 is declared as Object, Jackson assigns it the UntypedObjectDeserializer, which is easy enough to understand; it then calls a different read method depending on the token type. Since JSON stores only the data and not the concrete type, Jackson treats 2.00 as an ID_NUMBER_FLOAT token. That case has two branches: the default calls getNumberValue(), which goes through a double and loses the scale, returning 2.0; the other branch, taken when the USE_BIG_DECIMAL_FOR_FLOATS feature is enabled, calls getDecimalValue() and returns a BigDecimal. The fix is therefore simple: enable that feature.
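The two branches boil down to how the raw number token 2.00 is turned into a Java number. A minimal stdlib sketch of the two paths (mirroring, not calling, the Jackson methods: getNumberValue() parses the token as a double, getDecimalValue() builds a BigDecimal from the exact token text):

```java
import java.math.BigDecimal;

public class FloatPathDemo {
    public static void main(String[] args) {
        String token = "2.00"; // the raw JSON number token

        // Default path (getNumberValue()): parsed as a double, the scale is lost
        double asDouble = Double.parseDouble(token);
        System.out.println(asDouble); // 2.0

        // USE_BIG_DECIMAL_FOR_FLOATS path (getDecimalValue()): exact text preserved
        BigDecimal asDecimal = new BigDecimal(token);
        System.out.println(asDecimal); // 2.00
    }
}
```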
ObjectMapper mapper = new ObjectMapper();
mapper.enable(DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS);
Running the test again gives the following result:
{"p1":"haha1","p2":1.00}
{"p1":"haha2","p2":2.00}
Bean1 [p1=haha1, p2=1.00]
Bean2 [p1=haha2, p2=2.00]
Jackson also provides extension points for both serialization and deserialization, so for a special bean you can define your own deserializer. For Bean2, for example, you could implement a Bean2Deserializer and register it with the ObjectMapper:
ObjectMapper mapper = new ObjectMapper();
SimpleModule desModule = new SimpleModule("testModule");
desModule.addDeserializer(Bean2.class, new Bean2Deserializer(Bean2.class));
mapper.registerModule(desModule);
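The Bean2Deserializer itself is not shown in the original; one possible sketch follows. The Bean2 stand-in and its setters are assumptions based on the omitted get/set methods; the deserializer reads the numeric token with getDecimalValue() directly, so the scale survives regardless of mapper features:

```java
import java.io.IOException;
import java.math.BigDecimal;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.deser.std.StdDeserializer;
import com.fasterxml.jackson.databind.module.SimpleModule;

public class CustomDeserializerDemo {
    // Minimal stand-in for the article's Bean2 (its getters/setters were omitted there).
    public static class Bean2 {
        private String p1;
        private Object p2;
        public String getP1() { return p1; }
        public void setP1(String p1) { this.p1 = p1; }
        public Object getP2() { return p2; }
        public void setP2(Object p2) { this.p2 = p2; }
    }

    public static class Bean2Deserializer extends StdDeserializer<Bean2> {
        public Bean2Deserializer(Class<?> vc) { super(vc); }

        @Override
        public Bean2 deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
            Bean2 bean = new Bean2();
            // Parser is positioned at START_OBJECT; walk the fields token by token.
            while (p.nextToken() != JsonToken.END_OBJECT) {
                String field = p.getCurrentName();
                p.nextToken(); // advance to the field's value
                if ("p1".equals(field)) {
                    bean.setP1(p.getText());
                } else if ("p2".equals(field)) {
                    bean.setP2(p.getDecimalValue()); // BigDecimal built from the exact token text
                }
            }
            return bean;
        }
    }

    public static void main(String[] args) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        SimpleModule desModule = new SimpleModule("testModule");
        desModule.addDeserializer(Bean2.class, new Bean2Deserializer(Bean2.class));
        mapper.registerModule(desModule);

        Bean2 b = mapper.readValue("{\"p1\":\"haha2\",\"p2\":2.00}", Bean2.class);
        System.out.println(b.getP2()); // 2.00
    }
}
```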
JSON itself stores only the data, not the data's type, so presumably every JSON-style serialization method has the same problem. Let's check FastJson.
The test code:
public class FJTest {
    public static void main(String[] args) {
        Bean1 bean1 = new Bean1("haha1", new BigDecimal("1.00"));
        Bean2 bean2 = new Bean2("haha2", new BigDecimal("2.00"));
        String jsonString1 = JSON.toJSONString(bean1);
        String jsonString2 = JSON.toJSONString(bean2);
        System.out.println(jsonString1);
        System.out.println(jsonString2);
        Bean1 bean11 = JSON.parseObject(jsonString1, Bean1.class);
        Bean2 bean22 = JSON.parseObject(jsonString2, Bean2.class);
        System.out.println(bean11.toString());
        System.out.println(bean22.toString());
    }
}
The result:
{"p1":"haha1","p2":1.00}
{"p1":"haha2","p2":2.00}
Bean1 [p1=haha1, p2=1.00]
Bean2 [p1=haha2, p2=2.00]
FastJson, it turns out, does not have this problem. Looking at its source and locating the parse method of DefaultJSONParser, part of the code is as follows:
public Object parse(Object fieldName) {
    final JSONLexer lexer = this.lexer;
    switch (lexer.token()) {
        case SET:
            lexer.nextToken();
            HashSet<Object> set = new HashSet<Object>();
            parseArray(set, fieldName);
            return set;
        case TREE_SET:
            lexer.nextToken();
            TreeSet<Object> treeSet = new TreeSet<Object>();
            parseArray(treeSet, fieldName);
            return treeSet;
        case LBRACKET:
            JSONArray array = new JSONArray();
            parseArray(array, fieldName);
            if (lexer.isEnabled(Feature.UseObjectArray)) {
                return array.toArray();
            }
            return array;
        case LBRACE:
            JSONObject object = new JSONObject(lexer.isEnabled(Feature.OrderedField));
            return parseObject(object, fieldName);
        case LITERAL_INT:
            Number intValue = lexer.integerValue();
            lexer.nextToken();
            return intValue;
        case LITERAL_FLOAT:
            Object value = lexer.decimalValue(lexer.isEnabled(Feature.UseBigDecimal));
            lexer.nextToken();
            return value;
        case LITERAL_STRING:
            String stringLiteral = lexer.stringVal();
            lexer.nextToken(JSONToken.COMMA);
            if (lexer.isEnabled(Feature.AllowISO8601DateFormat)) {
                JSONScanner iso8601Lexer = new JSONScanner(stringLiteral);
                try {
                    if (iso8601Lexer.scanISO8601DateIfMatch()) {
                        return iso8601Lexer.getCalendar().getTime();
                    }
                } finally {
                    iso8601Lexer.close();
                }
            }
            return stringLiteral;
        case NULL:
            lexer.nextToken();
            return null;
        case UNDEFINED:
            lexer.nextToken();
            return null;
        case TRUE:
            lexer.nextToken();
            return Boolean.TRUE;
        case FALSE:
            lexer.nextToken();
            return Boolean.FALSE;
        // ...omitted...
}
Much like Jackson, each token type gets its own handling; 2.00 is likewise treated as a float literal, and whether a BigDecimal is produced likewise depends on a feature check, here Feature.UseBigDecimal. The difference is simply that FastJson enables this feature by default.
Now let's look at a non-JSON serialization method and see how protostuff handles this kind of problem.
The test code:
@SuppressWarnings("unchecked")
public class PBTest {
    public static void main(String[] args) {
        Bean1 bean1 = new Bean1("haha1", new BigDecimal("1.00"));
        Bean2 bean2 = new Bean2("haha2", new BigDecimal("2.00"));

        LinkedBuffer buffer1 = LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);
        Schema schema1 = RuntimeSchema.createFrom(bean1.getClass());
        byte[] bytes1 = ProtostuffIOUtil.toByteArray(bean1, schema1, buffer1);
        Bean1 bean11 = new Bean1();
        ProtostuffIOUtil.mergeFrom(bytes1, bean11, schema1);
        System.out.println(bean11.toString());

        LinkedBuffer buffer2 = LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);
        Schema schema2 = RuntimeSchema.createFrom(bean2.getClass());
        byte[] bytes2 = ProtostuffIOUtil.toByteArray(bean2, schema2, buffer2);
        Bean2 bean22 = new Bean2();
        ProtostuffIOUtil.mergeFrom(bytes2, bean22, schema2);
        System.out.println(bean22.toString());
    }
}
The result:
Bean1 [p1=haha1, p2=1.00]
Bean2 [p1=haha2, p2=2.00]
Protostuff does not have the problem either. The reason is that Protostuff records type information in the binary output at serialization time; each type has its own identifier, all of which are listed in RuntimeFieldFactory:
public abstract class RuntimeFieldFactory<V> implements Delegate<V> {
    static final int ID_BOOL = 1, ID_BYTE = 2, ID_CHAR = 3, ID_SHORT = 4,
            ID_INT32 = 5, ID_INT64 = 6, ID_FLOAT = 7, ID_DOUBLE = 8,
            ID_STRING = 9, ID_BYTES = 10, ID_BYTE_ARRAY = 11,
            ID_BIGDECIMAL = 12, ID_BIGINTEGER = 13, ID_DATE = 14,
            ID_ARRAY = 15, // 1-15 is encoded as 1 byte on protobuf and
                           // protostuff format
            ID_OBJECT = 16, ID_ARRAY_MAPPED = 17, ID_CLASS = 18,
            ID_CLASS_MAPPED = 19, ID_CLASS_ARRAY = 20,
            ID_CLASS_ARRAY_MAPPED = 21, ID_ENUM_SET = 22, ID_ENUM_MAP = 23,
            ID_ENUM = 24, ID_COLLECTION = 25, ID_MAP = 26,
            ID_POLYMORPHIC_COLLECTION = 28, ID_POLYMORPHIC_MAP = 29,
            ID_DELEGATE = 30, ID_ARRAY_DELEGATE = 32, ID_ARRAY_SCALAR = 33,
            ID_ARRAY_ENUM = 34, ID_ARRAY_POJO = 35, ID_THROWABLE = 52,
            // pojo fields limited to 126 if not explicitly using @Tag annotations
            ID_POJO = 127;
    ......
}
During serialization the data is laid out as tag/value pairs. The tag encodes the field's position (first field, second field, and so on) together with type information. Here is the binary produced by serializing the two beans:
10 5 104 97 104 97 49 18 4 49 46 48 48 10 5 104 97 104 97 50 19 98 4 50 46 48 48 20
104 97 104 97 49 and 104 97 104 97 50 are the ASCII bytes of haha1 and haha2; 49 46 48 48 and 50 46 48 48 are the strings 1.00 and 2.00.
Bean2's payload is noticeably larger than Bean1's: because p2 is stored as an Object, the start and end markers of the Object must be written out, along with its concrete type information.
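Protostuff uses the protobuf wire format, where each tag byte packs the field number into the upper bits and the wire type into the lower three (tag = fieldNumber << 3 | wireType). Decoding the tag bytes above makes the extra Object bookkeeping visible; in particular, tag 98 maps to field number 12, which is exactly ID_BIGDECIMAL from the table above. The wire-type names in the comments are the standard protobuf ones:

```java
public class TagDecode {
    // Split a protobuf tag byte into its field number and wire type.
    static String decode(int tag) {
        return "field " + (tag >>> 3) + ", wire type " + (tag & 7);
    }

    public static void main(String[] args) {
        System.out.println(decode(10)); // field 1, wire type 2: p1 as a length-delimited string
        System.out.println(decode(18)); // field 2, wire type 2: Bean1.p2 written directly
        System.out.println(decode(19)); // field 2, wire type 3: Bean2.p2 opens a group (the Object start marker)
        System.out.println(decode(98)); // field 12, wire type 2: 12 = ID_BIGDECIMAL, the stored type info
        System.out.println(decode(20)); // field 2, wire type 4: end of the group (the Object end marker)
    }
}
```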
For more detail, see: https://my.oschina.net/OutOfM...
JSON-style serialization formats do not store the data's type, so some types cannot be distinguished at deserialization time and can only be recovered by enabling the right features; on the other hand, the JSON format is far more readable. Formats that serialize straight to binary are less readable, but they can carry much more information, including type information, which makes them more complete in this respect.
https://github.com/ksfzhaohui...
https://gitee.com/OutOfMemory...