Tuesday, July 15, 2008

About the 10 year plan

Through Stefan Tilkov I came across this comment by Benjamin Carlyle. Benjamin says:

"Perhaps the clearest contrast is between REST and a strongly-typed O-O language. In nice, crinkly java I get a compile error whenever two components disagree on the definition of an interface. The client tries to call method foo. There is no method foo. Bail. In this setting it makes sense to leverage the strongly-typed nature of the language to make sure you don't release junk. You define interfaces in a detailed and domain-specific way that is checked to yield a consistent releasable whole."

I briefly touched on this before, but given that an interface with more than 4 methods is deemed unsuitable on the Web (for all the reasons REST advocates associate with that approach), why does it seem so natural and acceptable to use finer-grained OO interfaces within a single JVM? Would it be the ultimate Web experience if there were only generic methods everywhere, on the Web and in virtual machines? The only difference would be that clients programming against remote services would need to be able to catch remote exceptions and handle them as needed.
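The contrast might be sketched in Java like this (all names here are hypothetical, purely for illustration): a domain-specific interface that the compiler checks at every call site, next to a single generic method in the spirit of the Web's uniform interface, where mistakes only surface at runtime.

```java
public class UniformVsSpecific {
    // A domain-specific interface: the compiler verifies every caller.
    interface LoanService {
        double quoteRate(double amount, int months);
    }

    // A generic, uniform interface in the spirit of the Web's verbs:
    // one method, string payloads, errors surface only at runtime.
    interface Resource {
        String get(String uri);
    }

    static final LoanService TYPED = (amount, months) -> 0.05 + months * 0.001;

    static final Resource GENERIC = uri -> {
        if (uri.equals("/rate")) return "0.05";
        throw new RuntimeException("404: " + uri); // no compile-time check
    };

    public static void main(String[] args) {
        // TYPED.quoteRate("twelve") would not even compile;
        // GENERIC.get("/rtae") compiles fine and fails only when called.
        System.out.println(TYPED.quoteRate(10_000, 12));
        System.out.println(GENERIC.get("/rate"));
    }
}
```

Nothing about the generic interface is wrong per se; the point is only where the consistency check happens - at build time for the typed interface, at interaction time for the uniform one.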

This is probably a lot of nonsense. But I hope it captures what seems like one of the main reasons behind the 'split' between the RESTful and Web Services communities: for some it's natural to program in terms of generic interfaces, some will eventually accept it, and for others it's simply a non-starter and more specific interfaces are on the map - with a proper data extensibility policy they can fare pretty well too.

Benjamin then says:

"In REST, I want the client or the server I deployed literally 10 years ago to work with whatever I am putting out today. I also want whatever I'm putting out today to work with every bit of code written since that time, and every bit of code that will be written in the next 10 years."

I think I've read on one of Benjamin's blogs that in 10 years' time it will be all about REST. I have to admit that prediction may be much closer to reality, as people are getting behind REST more and more.

The prediction quoted above is basically unrealistic IMHO, with one exception: if you're a client doing GET. I've always believed that GET is one of the main driving forces behind the Web (plus links to URIs which can be GETed). Doing POSTs, PUTs, etc., in 10 years' time? I doubt it - but I'm happy to be proven wrong eventually.


fuzzyBSc said...

I'm not quite sure I understand your response, so perhaps I was not completely clear in my original comment.

I meant to say that while a strongly-typed O-O language uses a compiler to ensure consistency, a distributed architecture with components that can't be upgraded all at once relies on consistency being there before its components start communicating.

In Java it is good to have detailed types, as this allows you to lean on the compiler to check consistency of components/classes within your (say) JVM. Code breaks when you change your rules, and you fix it.

In REST, uniformity serves to achieve the consistency, and to allow the set of rules to evolve gently over time. You can't fix the code that was written in one of the old ways, but you can continue to work with it. The distributed interfaces are hopefully already less closely coupled than the ones you see within a process, so you already have a leg-up.

Certainly GET is the main driving force behind the Web. It lets any component participate in a standard interaction to move data from server to client based on a URL. Combined with a well-controlled set of document types, the client can be prepared ahead of time to process any kind of document a server of its era may produce. Combined with content negotiation and/or must-ignore semantics, future servers can continue to interoperate with this client.
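The must-ignore point could be sketched like this (a toy 'key: value' format, not any real media type; all names are hypothetical): a client only reads the fields it knows about, so a document produced by a future server that has added new fields still works for it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MustIgnoreClient {
    // Parse a toy "key: value" document into an ordered map of fields.
    static Map<String, String> parse(String doc) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String line : doc.split("\n")) {
            int i = line.indexOf(':');
            if (i > 0) {
                fields.put(line.substring(0, i).trim(), line.substring(i + 1).trim());
            }
        }
        return fields;
    }

    // A client of the v1 format only ever looks at fields it knows;
    // any fields a future server adds are silently ignored.
    static String title(Map<String, String> fields) {
        return fields.getOrDefault("title", "(untitled)");
    }

    public static void main(String[] args) {
        String v1Doc = "title: Hello\nauthor: fuzzyBSc";
        String v2Doc = "title: Hello\nauthor: fuzzyBSc\nrating: 5"; // field added later
        System.out.println(title(parse(v1Doc)));
        System.out.println(title(parse(v2Doc))); // same answer: must-ignore at work
    }
}
```

The client keeps working across server generations precisely because it makes no claim about the complete shape of the document, only about the parts it needs.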

PUT is the natural next step for machine interaction, and exhibits similar properties. Pub/sub is also a natural step forward for enterprise systems. The lack of intricate types in REST means that "patterns" are usually encapsulated in a concrete protocol interaction with a clear specification somewhere.

For the record: I don't see REST fundamentally replacing O-O. However, I can see the day coming where O-O is effectively encapsulated within REST for large systems, just as O-O now encapsulates the structured programming model that preceded it. REST at the network level, O-O at the object level, structured at the method level. Non-REST SOA may still fill a gap at the network level for small systems, and hopefully some level of protocol harmonisation can be achieved between REST and non-REST SOA to allow small networks to do the practical things they want to do at this level without reinventing any wheels.


Sergey Beryozkin said...

Thanks for the comment.

What I was trying to say is this:

Most likely, only generic clients (Atom readers, browsers, etc.) will still work in 10 years' time.

It would likely work for dedicated client applications too, though the chance of success is lower, given that the meaning of the retrieved data may have changed in 10 years or so.

The reason I doubt the other main verbs will work consistently in 10 years' time for a given application is that they are mostly about providing some information to the server, and I'm not quite sure what it means to write an application which will function properly when a client written 10 years ago PUTs/POSTs/DELETEs against a new, evolved application.

I suppose it will probably work for requests like 'POST your comment' - this kind of request may be supported for a long, long time. But what about a 'loan request'? The actual rules for a loan application will have changed. This is a classical problem, I suppose - but hopefully it's clearer now what I mean.

Purely from a backward/forward compatibility perspective, the 'extreme' measures used by Java, for example (the API never changes), and likewise by REST (a (mostly) uniform interface for any RESTful service), can be used to achieve great results for RESTful or even SOAP-based services, all coupled with a good policy for evolving the actual data formats.
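Such a data-format evolution policy might look like this in a sketch (all names hypothetical, for illustration only): every field added after the first version is optional with a server-side default, so a form POSTed by a 10-year-old client still validates against today's service.

```java
import java.util.Map;

public class EvolvedLoanService {
    // v1 clients send only "amount"; v2 later added an optional "term" field.
    // The evolution policy: every post-v1 field has a default, so old
    // requests remain valid; unknown fields would simply be ignored.
    static String handleLoanRequest(Map<String, String> form) {
        double amount = Double.parseDouble(form.get("amount"));            // required since v1
        int termMonths = Integer.parseInt(form.getOrDefault("term", "12")); // added in v2, defaulted
        return "loan of " + amount + " over " + termMonths + " months";
    }

    public static void main(String[] args) {
        // A request an old client could have produced 10 years ago...
        System.out.println(handleLoanRequest(Map.of("amount", "1000")));
        // ...and one from a current client.
        System.out.println(handleLoanRequest(Map.of("amount", "1000", "term", "24")));
    }
}
```

Of course, this only addresses the shape of the data; if the business rules behind the loan have changed, no amount of format defaulting will save the old client - which is exactly the point above.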

But there's more to it, which is the actual meaning of the data - that's why I'm saying that generic GET clients will likely work reliably in 10-15 years' time.