I've got a bit interested in Rich Internet Applications enabled by technologies like Flex, Silverlight and JavaFX. So I spotted A JavaFX Christmas Demo. These comments attracted my attention:
Question : Could you describe the problem?
Problem : after snow fall starts, i am having %70-80 CPU usage.
Answer : The snow is currently very inefficient...
I'm sorry for presenting the bits out of context (the demo is actually good), but these comments made me smile and also reminded me how much I miss seeing real snow. I'm going to Minsk in early January and I do hope to see some snow at home. But even in Minsk there's no longer any guarantee of snow on the streets. Global warming. Hope the snow will be working just fine this year.
Happy New Year !
Friday, December 28, 2007
Thursday, December 27, 2007
WADL and WS-Policy
From Stefan Tilkov I found out about a WADding in Jersey post by Marc Hadley about the Jersey JSR-311 implementation being able to generate a WADL contract for every individual resource.
The next step is to tell the consumers of these resources about their additional capabilities: about their different security requirements, about some of their QOS properties which do not even manifest themselves on the wire, and so on and so forth.
Clarification for Jean-Jacques
I've wanted to comment on some of Jean-Jacques Dubray's WS-vs-REST posts for a while; in fact, I briefly touched on one of his posts earlier.
After looking at my blog reader's listing this morning I spotted an [SOA] Answer for Sergey post. Or no, I've already had all the answers to the questions I asked earlier :-). Either way, how could I have missed this post, which was published two weeks ago? I can tell you why: as soon as I see an [SOA] or [REST] prefix, my mental filter tells me to just move on to the next post, because I already know what those posts are about: the dark side of REST.
I personally see the value in WS-*, but I've found that after reading Jean-Jacques's posts I start thinking even more about REST. Not sure why. Probably because these posts remind me of the pointless WS-vs-REST debates I read 5-6 years ago. Or maybe because people naturally react to somewhat strong opinions: it's the same way I react to people saying REST is the only way to go.
In this particular post Jean-Jacques comments on one of my previous posts, where I say:
I don't think it's going to stop people from actually doing REST across the enterprise. I believe one can. After all, the RESTful approach is Turing-complete (I don't remember exactly what that means :-), but in this context it means one can build any type of enterprise service with REST).
With all my respect for Jean-Jacques's expertise, I have to say he just missed the point of the post.
And the point was: if you do code-generated REST all the way, then the next step is to do REST-* for the kind of things one can do with some of the advanced WS-* specs; thus a do-it-yourself approach can be a reasonable and powerful alternative.
As I said, claiming that it's only with WS-* that one can do some advanced things is equivalent to ignoring the fact that REST is challenging WS-* all the way. Different sorts of issues need to be discussed, IMHO: will you really get any of REST's benefits if you roll out your own transaction system? What comes next after you code-generate your RESTful clients and services? How suitable is a fine-grained approach to dealing with resources for different types of scenarios? And so on.
Saying that REST creates strong coupling is not very convincing. A Big-Bang versioning approach, which can indeed quickly show whether clients are affected, does not seem to be the best thing to follow all the time.
Friday, December 21, 2007
About Validation By Projection
David Orchard explains the idea behind Validation By Projection.
For example, Java JAXB 2.0 allows unknown elements to be ignored, but only if no validation is enabled. The Runtime Processing section, when describing how to process EIIs, has this to say:
If validation is on (i.e. Unmarshaller.getSchema() is not null), then
report a javax.xml.ValidationEvent. Otherwise, this will cause any
unknown elements to be ignored.
That is, you either enable validation and hence break on seeing ignorable unknown elements, or you disable validation but then have to do some sort of internal validation on your own.
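The no-validation behavior can be seen with a minimal sketch (assuming a JAXB runtime is available, e.g. Java 8's built-in javax.xml.bind; the Customer class, the element names and the "Alice" data are made up for illustration):

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringReader;

public class JaxbIgnoreDemo {

    // A hypothetical v1 type: it knows nothing about <loyaltyTier>
    @XmlRootElement(name = "customer")
    public static class Customer {
        public String name;
    }

    public static void main(String[] args) throws Exception {
        // A "v2" document carrying an element the v1 type does not declare
        String xml = "<customer><name>Alice</name>"
                   + "<loyaltyTier>gold</loyaltyTier></customer>";
        JAXBContext ctx = JAXBContext.newInstance(Customer.class);
        // No schema is set (Unmarshaller.getSchema() == null), so per the
        // spec text quoted above the unknown element is silently ignored
        // rather than reported as a ValidationEvent
        Customer c = (Customer) ctx.createUnmarshaller()
                .unmarshal(new StringReader(xml));
        System.out.println(c.name); // prints Alice
    }
}
```

Calling Unmarshaller.setSchema(...) with a compiled v1 schema before unmarshalling would make the same document fail instead.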
Perhaps expecting the code itself to check that all the right data is in place is not that bad an idea at all :-). Thus one possible approach is indeed to just disable the schema-level validation, as suggested in one of the comments on the Solving the XSD Versioning Problem post.
Indeed, using XPath or XSLT to just pick the right data is the simplest way to do your own validation by projection.
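A minimal sketch of that idea with the JDK's own XPath API (the document and element names are again hypothetical): the code selects only what it understands, so unknown elements are never even looked at.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

public class ProjectionDemo {
    public static void main(String[] args) throws Exception {
        // A newer-version document with an element this consumer ignores
        String xml = "<customer><name>Alice</name>"
                   + "<loyaltyTier>gold</loyaltyTier></customer>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // Project out only the known data; this is the whole "validation":
        // if /customer/name were missing, the result would be empty
        String name = XPathFactory.newInstance().newXPath()
                .evaluate("/customer/name", doc);
        System.out.println(name); // prints Alice
    }
}
```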
But an alarm bell rings in my head. I like XPath and XSLT a lot, but, hmm, then I need to do it all myself :-). Which, in fact, may be the best thing in general, but still... :-). Also, writing an XML Schema that allows for forward compatibility is not a trivial exercise at all.
Hopefully, with the arrival of XML Schema 1.1, things will become simpler in this regard. Reading the schema data structures specification is not for the faint-hearted, so I searched for the phrase 'validation' and found this text:
Content models may now be defined as "open", which means that elements other than those explicitly named will be accepted during validation
Is this something that can already be achieved right now with RelaxNG?
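For reference, the open-content feature quoted above looks roughly like this in XML Schema 1.1 (a sketch only; the "Customer" type and "name" element are hypothetical):

```xml
<!-- XSD 1.1: name only the elements we know about, and declare that
     anything else may appear interleaved with them during validation -->
<xs:complexType name="Customer" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:openContent mode="interleave">
    <xs:any namespace="##any" processContents="skip"/>
  </xs:openContent>
  <xs:sequence>
    <xs:element name="name" type="xs:string"/>
  </xs:sequence>
</xs:complexType>
```

RELAX NG can express a similar pattern today with wildcard name classes (element * { ... }), which is part of why it handles extensible content models more gracefully than XSD 1.0.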
Monday, December 3, 2007
Narrowing down the problem
Steve Loughran has commented on my previous post, thanks. One thing I'd like to clarify is that what I was trying to do was not to push back on the criticism of WSDL per se, but to narrow down the problem a bit.
If one takes a WSDL document and runs a wsdltojava tool of some sort against it, then the link between this WSDL doc and a generated type is obvious. Now, let's try to reverse-engineer it a bit. Suppose we have a generated Customer type which probably relies on some JAXB mechanics; how do we know which description language was used to generate it in the first place? It may have been generated from WADL, and it may be a REST client library which depends on it, which I believe was what Steve was describing in his original email. The point I'm trying to make is that while a description language like WSDL makes it easy to do the code generation, IMHO WSDL in itself is not the only party to blame, so to say...
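To illustrate: a generated type carries no trace of its origin. A hypothetical Customer class produced by a wsdltojava-style or WADL-based tool might look like this (the fields and names are invented for the example), and nothing in it betrays which description language it came from:

```java
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlType;

// Typical code-generator output: plain JAXB metadata plus bean accessors.
// The same artifact could have been emitted from WSDL or from WADL.
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "Customer", propOrder = { "id", "name" })
public class Customer {

    protected int id;
    protected String name;

    public int getId() { return id; }
    public void setId(int value) { id = value; }

    public String getName() { return name; }
    public void setName(String value) { name = value; }
}
```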
About Ant vs Maven: I personally still like Ant because I know it better. And by comparing the Ant-vs-Maven and REST-vs-WS-* debates, the only thing I wanted to say is that when proponents of competing technologies/approaches discuss the pros and cons of their technologies of choice, their arguments sound somewhat familiar :-). And yes, I agree, the build scenario Steve described is easier to do with Ant.
Is it about REST vs WS-* ? No, it's about DIY vs Kits
I'm wondering, am I the last one to have understood this somewhat obvious thing?
While treading on my code-generation path and thinking about how one would properly generate RESTful client code, I suddenly realized it.
It's not about REST vs WS-*. It's about writing or wiring up all the code interfacing with the outside world manually, vs kits generating it for you. It's about developers skilled and motivated enough, and not constrained by various governance rules, writing their web services, vs developers, quite possibly at least equally skilled, using the generated code.
It's not about REST vs WS-* at all. Because if one creates a RESTful server using generated code, with annotations occupying 30% of the actual source, then it's hard to see what advantage one can get out of it. Addressability? How good can URIs backed by generated code be?
Because if one generates a RESTful client, then it's not clear how different such code can be from a typical WS client.
Dealing with WS-* services is mostly about dealing with code generation. WS-Policy adds an extra dimension to the code generation, in that your design-time tool can generate a lot of the boilerplate code by analyzing a given service's requirements presented as WS-Policy expressions. WS-Policy brings some dynamism into the runtime as well, but that's off-topic. Writing WS-* code manually is error-prone. Modern WS-* kits do let developers plug their own handlers into the execution path, but the kits are mostly in control.
Writing RESTful services such that one gets as many of the advantages associated with REST as possible is mostly about writing the code yourself. That's not meant as a critique here. The thing is, in order to write a service which can truly live up to the promise of REST, one needs to take charge, take control. In fact, I'm surprised that so many of us are so thrilled about turning existing Java classes into RESTful services. Developers wishing to write client code which can survive as many changes on the server as possible would be better off writing as much code of their own as possible.
The more I think about it, the more I'm getting convinced that arguments like 'RESTful services are better' are really arguments in favor of avoiding code generation. I do start hearing people say now that REST does not need WADL, or why would you need WS-Policy for REST: they do not deal with generated code, they prefer the freedom of dynamic languages or prefer writing their Java or C# web services code themselves, and don't mind writing some code which deals with security or sets up various handlers.
Does it mean it's wrong to promote RESTful services with the help of WADL or JSR-311? I don't think so. Many users need to take simple first steps in this respect; only if they appreciate over time that exposing their resources the RESTful way gives their application an edge will they move to the next level. Or they might decide to stay with the generated code if it works for them.
Code-generated RESTful services can be the first step for many developers. Especially for those who are lazy like me :-). What would be the difference, then, between code-generated RESTful and SOAP clients? A typical SOAP client interacts with up to two layers of services: the service itself and the EPRs returned from that service. Likewise, it's difficult to imagine a generated RESTful client dealing with more than two layers (beyond collections and their entry resources, for example). It's not a big deal to write a bit more code on top of it to deal with an arbitrary number of nested resources, but perhaps in cases like this it would make sense not to code-generate at all.
WS-* clients get a WS-Policy boost and have client runtimes doing a lot of work for them.
That's why it's easy to see why there's momentum behind AtomPub in the RESTful community: Atom-powered client libraries will hide some complexity from users, will help them to address a number of security considerations, etc.
What happens next, though? Policies are needed everywhere, and there are initiatives in the Atom world, for example, to address the problem of advertising certain service requirements to potential consumers. How far will the Atom community decide to go? Will they attempt to come up with a policy language which will let Atom collections and entries alike dynamically advertise their advanced capabilities, something which is possible in the WS-* world? Or will they decide not to make things more complicated? Arguably, the advanced capabilities WS-Policy can bring into client runtimes may not be needed by the majority of applications.
It does seem it's not about WS-* vs REST at all. Rather, the challenge is how to avoid creating REST-*.
In the end, I'd like to comment briefly on a WADL metamodel post published by Jean-Jacques Dubray, as I borrowed the REST-* term from that post. I think one can read a lot of interesting stuff on Jean-Jacques's blog, but I found the conclusion of the post to be a bit emotional :-) :
"Good luck with your enterprise implementation of REST :-)"
I don't think it's going to stop people from actually doing REST across the enterprise. I believe one can. After all, the RESTful approach is Turing-complete (I don't remember exactly what that means :-), but in this context it means one can build any type of enterprise service with REST).
As far as I'm concerned, the question is: what does it mean to do REST across the enterprise? Something tells me REST-* may not be the way to do it. But it's a bit too complex to guess right :-)