I've never had good experiences with automated serialisation -- even though it sounds like other people do it with success. What's the secret?
To give you a flavour of the kind of problem: in C# (or rather .NET), Json.NET reads JSON and calls setters on the target class.
That means the setters have to be public, and you don't know what order they will be called in, and you have no real signal about when it is all done. The constructor is no longer enough to guarantee the object's invariants are met.
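Concretely, something like this (a minimal sketch; the class and the JSON are made up):

```csharp
using System;
using Newtonsoft.Json;

// Property order in the JSON drives setter order; nothing on the class
// gets told "all properties are set now, check your invariants".
var range = JsonConvert.DeserializeObject<DateRange>(
    @"{ ""End"": ""2020-01-01"", ""Start"": ""2020-06-01"" }");
Console.WriteLine(range.Start <= range.End);  // False: invariant silently broken

// The setters must be public (by default), so the type can no longer
// enforce Start <= End itself -- any caller can break it, not just Json.NET.
public class DateRange
{
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}
```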
It sounds like you're parsing the JSON straight into your business objects, which is the source of the problem. You need an intermediate class that represents a strongly-typed version of the JSON message. So Json.NET goes from the JSON string into this message object, and then you write your own code (or, if it works for you, use a tool like AutoMapper) to go from that into your business classes.
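Something like this (hypothetical names; the hand-written mapping at the end is where AutoMapper could slot in if the shapes line up):

```csharp
using System;
using Newtonsoft.Json;

// Dumb, JSON-shaped message class: public setters, no invariants.
// It never leaves the boundary layer.
public class OrderMessage
{
    public string CustomerId { get; set; }
    public decimal? Total { get; set; }
}

// Real business object: constructed once, invariants checked up front.
public class Order
{
    public string CustomerId { get; }
    public decimal Total { get; }

    public Order(string customerId, decimal total)
    {
        if (string.IsNullOrWhiteSpace(customerId))
            throw new ArgumentException("customer id is required");
        if (total < 0)
            throw new ArgumentException("total must be non-negative");
        CustomerId = customerId;
        Total = total;
    }

    // The hand-written boundary crossing: message in, business object out.
    public static Order FromMessage(OrderMessage msg)
    {
        if (msg.Total is null)
            throw new ArgumentException("total is missing");
        return new Order(msg.CustomerId, msg.Total.Value);
    }
}

// Usage: var order = Order.FromMessage(
//     JsonConvert.DeserializeObject<OrderMessage>(json));
```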
This is what I settled on -- at least in the hard cases. And if I understand his acronyms, it's also what @mythz is recommending.
Perhaps I should have done it for the easy cases as well (where the business objects are struct-like enough that it doesn't matter) and just lived with the boilerplate.
But I see little advantage in this over just having a dictionary that I can inspect to initialise my real business object. True, that is not strongly typed, but the stage between the message object and the business object can produce validation errors anyway, so why not treat type checking as part of that?
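i.e. something like this, with Json.NET's JObject standing in for the dictionary (a sketch, made-up field names):

```csharp
using System;
using Newtonsoft.Json.Linq;

var obj = JObject.Parse(@"{ ""customerId"": ""c-42"", ""total"": ""oops"" }");

// Missing field, wrong type, bad value: all just validation errors,
// reported from the same place.
var customerId = obj["customerId"]?.Value<string>();
if (string.IsNullOrWhiteSpace(customerId))
    throw new ArgumentException("customer id is required");

var totalToken = obj["total"];
if (totalToken?.Type != JTokenType.Integer && totalToken?.Type != JTokenType.Float)
    throw new ArgumentException("total must be a number");  // the "type check"
var total = totalToken.Value<decimal>();

// ...then construct the real business object from customerId and total.
```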
In Java land with Jackson/Gson they can use the getters/setters, or use reflection and find the private fields. The only time it is not completely automatic is when a JSON object mixes casings, e.g. myField1 alongside my_field1. Even then, adding an annotation fixes it. For any special formats, for example ISO 8601 dates, you can quickly define a serializer/deserializer and be done.
Is it really that hard in C#? It is not something I ever think about in Java.
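To half-answer my own question: from what I can tell, the Json.NET side looks about the same for custom formats. A sketch, assuming I have the JsonConverter<T> API right:

```csharp
using System;
using System.Globalization;
using Newtonsoft.Json;

// Custom converter for a non-ISO date format like "31/12/2020";
// registering it is one attribute, much like Jackson's @JsonDeserialize.
public class DayMonthYearConverter : JsonConverter<DateTime>
{
    private const string Format = "dd/MM/yyyy";

    public override void WriteJson(JsonWriter writer, DateTime value,
                                   JsonSerializer serializer)
        => writer.WriteValue(value.ToString(Format, CultureInfo.InvariantCulture));

    public override DateTime ReadJson(JsonReader reader, Type objectType,
                                      DateTime existingValue, bool hasExistingValue,
                                      JsonSerializer serializer)
        => DateTime.ParseExact((string)reader.Value,
                               Format, CultureInfo.InvariantCulture);
}

public class Invoice
{
    [JsonConverter(typeof(DayMonthYearConverter))]
    public DateTime DueDate { get; set; }
}
```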
Even beyond that, Jackson can use a private constructor if you use the @JsonCreator annotation on the constructor and @JsonProperty annotations on each parameter.
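If I remember right, Json.NET has a close equivalent: mark a (possibly non-public) constructor with [JsonConstructor] and it deserializes through the parameters, which also answers the invariants complaint upthread. A sketch:

```csharp
using System;
using Newtonsoft.Json;

public class DateRange
{
    public DateTime Start { get; }
    public DateTime End { get; }

    // Json.NET matches JSON property names to these parameter names
    // (case-insensitively) and calls the constructor directly, so the
    // invariant check runs during deserialization too.
    [JsonConstructor]
    private DateRange(DateTime start, DateTime end)
    {
        if (start > end)
            throw new ArgumentException("Start must not be after End");
        Start = start;
        End = end;
    }
}

// JsonConvert.DeserializeObject<DateRange>(json) now throws on bad
// input instead of handing back a half-valid object.
```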
Yes, automatic serialization is not a solution to the most pressing problems presented by the article -- it's just the first of all the things that have to be done at the boundary.
You have some DTO class that is your system's typed idea of the structure of the JSON -- this class is quite useful as implicit documentation, but it really has to stay internal to the boundary. You auto-deserialize into that class, then continue by constructing the real object from the deserialized data, and that object is what gets presented to the rest of the application. During that construction you can validate state and return errors.
This step can be eased by validation attributes on the boundary DTO properties, but there is always some custom logic that describes what is acceptable and what is not.
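In .NET that can look like DataAnnotations attributes on the DTO (a sketch, made-up names):

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// The declarative part: cheap structural checks on the boundary DTO.
var msg = new CreateUserMessage { Email = "not-an-email", Age = 200 };
var errors = new List<ValidationResult>();
bool ok = Validator.TryValidateObject(
    msg, new ValidationContext(msg), errors, validateAllProperties: true);
// ok == false; errors now lists both violations.

// The custom "what is acceptable" logic still lives in the hand-written
// step that builds the real business object from this DTO.

public class CreateUserMessage
{
    [Required, EmailAddress]
    public string Email { get; set; }

    [Range(0, 150)]
    public int? Age { get; set; }
}
```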
I have nothing good to say about COM, but I'm seriously thinking about gRPC [0] to get away from the sloppy JSON endpoints we code around at work today. Before I dive in, I would love to hear what it is that makes that architecture a bad one.
> To give you a flavour of the kind of problem: in C# (or rather .NET), Json.NET reads JSON and calls setters on the target class.
> That means the setters have to be public, and you don't know what order they will be called in, and you have no real signal about when it is all done. The constructor is no longer enough to guarantee the object's invariants are met.
Most awkward.