Friday, December 28, 2007

Snow is not working

I got interested a bit in Rich Internet Applications enabled by technologies like Flex, Silverlight and JavaFX. That's how I spotted A JavaFX Christmas Demo. These comments attracted my attention:

Question : Could you describe the problem?
Problem : after snow fall starts, i am having %70-80 CPU usage.
Answer : The snow is currently very inefficient...

I'm sorry for presenting the bits out of context (the demo is actually good), but these comments made me smile and also reminded me that I simply miss seeing real snow. I'm going to Minsk in early January and I do hope to see some snow at home. But even in Minsk there's no guarantee any more of snow being on the streets. Global warming. Hope the snow will be working just fine this year.

Happy New Year !

Thursday, December 27, 2007

WADL and WS-Policy

From Stefan Tilkov I found out about a WADding in Jersey post by Marc Hadley about the Jersey JSR-311 implementation being able to generate a WADL contract for every individual resource.

The next step is to tell the consumers of these resources about their additional capabilities : about their different security requirements, about some of their QOS properties which do not even manifest themselves on the wire, and so on and so forth.

Clarification for Jean-Jacques

I've wanted to comment on some of Jean-Jacques Dubray's WS-vs-REST posts for a while; in fact I briefly touched on one of his posts earlier.

After looking at my blog reader's listing this morning I spotted a [SOA] Answer for Sergey post. Oh no, I already have all the answers to the questions I asked earlier :-). Either way, how could I miss this post which was published 2 weeks ago ? I can tell you why : as soon as I see a [SOA] or [REST] prefix, my mental filter tells me just to move on to the next post because I already know what those posts are about : they're about the dark side of REST.

I personally see the value in WS-* but I find that after reading Jean-Jacques's posts I start thinking even more about REST. Not sure why. Probably because these posts remind me of the pointless WS-vs-REST debates I read 5-6 years ago. Or maybe because people naturally react to somewhat strong opinions : it's the same way I react to people saying REST is just the only way to go.

In this particular post Jean-Jacques comments on one of my previous posts, where I say :

I don't think it's going to stop people from actually doing REST across the enterprise. I believe one can. After all, the RESTful approach is Turing-complete (I don't remember what exactly that means :-) but in this context it means one can do any type of enterprise service with REST).

With all due respect to Jean-Jacques's expertise, I have to say he just missed the point of the post.

And the point was : if you do code-generated REST all the way, then the next step is to do REST-* for the kind of things one can do with some advanced WS-* specs; thus a do-it-yourself approach can be a reasonable and powerful alternative.

As I said, claiming that it's only with WS-* that one can do some advanced things is equivalent to ignoring the fact that REST is pushing WS-* all the way. Different sorts of issues need to be discussed IMHO, like : will you really get any of the REST benefits if you roll out your own transaction system, what's next after you code-generate your RESTful clients and services, how suitable is a fine-grained approach for dealing with resources in different types of scenarios, etc, etc.

Saying that REST creates strong coupling is not very convincing. A Big-Bang versioning approach, which indeed can quickly show whether clients are affected, does not seem to be the best thing to follow all the time.

Friday, December 21, 2007

About Validation By Projection

David Orchard explains the idea behind Validation By Projection.

For example, Java JAXB 2.0 allows unknown elements to be ignored, but only if no validation is enabled. The Runtime Processing section, when describing how to process EIIs, has this to say :

If validation is on (i.e. Unmarshaller.getSchema() is not null), then
report a javax.xml.ValidationEvent. Otherwise, this will cause any
unknown elements to be ignored.

That is, you either enable the validation and hence break on seeing ignorable unknown elements, or you disable the validation but have to do some sort of internal validation on your own.
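
Here's a minimal JAXB sketch of that trade-off; the Customer type and the customer.xml/customer.xsd files are hypothetical, but the setSchema() behaviour is the one described above.

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

public class LaxUnmarshalDemo {

    // Hypothetical type standing in for whatever the code generator produced.
    @XmlRootElement(name = "Customer")
    public static class Customer {
        public String name;
    }

    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Customer.class);
        Unmarshaller u = ctx.createUnmarshaller();

        // No schema set: unknown elements in customer.xml are silently ignored.
        Customer lax = (Customer) u.unmarshal(new File("customer.xml"));

        // Schema set: unknown elements now surface as validation events and,
        // with the default event handler, fail the unmarshalling.
        SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = sf.newSchema(new File("customer.xsd"));
        u.setSchema(schema);
        Customer strict = (Customer) u.unmarshal(new File("customer.xml"));
    }
}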

Perhaps expecting the code itself to check that all the expected data is there is not that bad an idea at all :-). Thus one of the possible approaches is indeed to just disable the schema-level validation, as suggested in one of the comments for a Solving the XSD Versioning Problem post.
Indeed, using XPath or XSLT to just pick the right data is the simplest way to do your own validation by projection.
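
For example, here is a rough sketch of validation by projection with plain XPath in Java; the /Customer/Name and /Customer/Id paths are made up for illustration, the point is only that anything else in the document is simply never looked at.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class ProjectionCheck {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new File("customer.xml"));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Project out only the data we care about; unknown elements are ignored.
        NodeList names = (NodeList) xpath.evaluate("/Customer/Name", doc, XPathConstants.NODESET);
        NodeList ids = (NodeList) xpath.evaluate("/Customer/Id", doc, XPathConstants.NODESET);

        if (names.getLength() == 0 || ids.getLength() == 0) {
            throw new IllegalStateException("Required Customer data is missing");
        }
    }
}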

But an alarm bell rings in my head. I like XPath and XSLT a lot but, hmm, I need to do it all myself then :-). Which, in fact, may be the best thing in general, but still ... :-). Also, writing an XML Schema which allows for forward compatibility is not a trivial exercise at all.

Hopefully, with the arrival of XML Schema 1.1 things will become simpler in this regard. Reading the schema structures specification is not for the faint-hearted, so I searched for the word 'validation' and found this text :

Content models may now be defined as "open", which means that elements other than those explicitly named will be accepted during validation

Is it something which can already be achieved right now with RelaxNG ?

Monday, December 3, 2007

Narrowing down the problem

Steve Loughran has commented on my previous post, thanks. One thing I'd like to clarify is that what I was trying to do was not to push back on the criticism of WSDL per se, but to actually narrow down the problem a bit.

If one takes a WSDL document and runs a wsdltojava tool of some sort against it, then the link between this WSDL doc and a generated type is obvious. Now, let's try to reverse-engineer it a bit. Suppose we have a generated Customer type which probably relies on some JAXB mechanics : how do we know which description language was used to generate it in the first place ? It may've been generated from WADL, and it's a REST client library which might be depending on it, which I believe was something Steve was describing in his original email. The point I'm trying to make is that while a description language like WSDL makes it easy to do the code generation, IMHO WSDL in itself is not the only party to blame, so to say...

About Ant vs Maven. I personally still like Ant because I know it better. And by comparing the Ant vs Maven and REST vs WS-* debates, the only thing I wanted to say is that when proponents of competing technologies/approaches discuss the pros and cons of their technologies of choice, their arguments sound somewhat familiar :-) And yes, I agree, the build scenario Steve described is easier to do with Ant.

Is it about REST vs WS-* ? No, it's about DIY vs Kits

I'm wondering, am I the last one who has understood this somewhat obvious thing ?

While treading on my code generation path and thinking about how one would generate RESTful client code properly, I suddenly realized it.

It's not about REST vs WS-*. It's about writing or wiring up all the code interfacing the outside world manually vs kits generating it for you. It's about developers skilled and motivated enough, and not constrained by various governance rules, writing their web services by hand vs developers, quite possibly equally skilled at the least, using the generated code.

It's not about REST vs WS-* at all. Because if one creates a RESTful server by using generated code, with annotations occupying 30% of the actual source, then it's hard to see what advantage one can get out of it. Addressability ? How good can URIs backed by generated code be ?
And if one generates a RESTful client then it's not clear how different such code can be from a typical WS client.
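
For a feel of what that annotation-driven server side looks like, here is a minimal hand-written JSR-311 style resource; the CustomerResource class and its paths are invented, and the annotation names are the ones from javax.ws.rs, which may differ from the early JSR-311 drafts.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/customers")
public class CustomerResource {

    // GET /customers/{id} returns an XML representation of one customer.
    @GET
    @Path("{id}")
    @Produces("application/xml")
    public String getCustomer(@PathParam("id") String id) {
        // A real implementation would look the customer up somewhere;
        // the point is simply how much of the class is annotations vs logic.
        return "<Customer><Id>" + id + "</Id></Customer>";
    }
}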

Dealing with WS-* services is mostly about dealing with the code generation. WS-Policy adds an extra dimension to the code generation in that your design-time tool can generate a lot of the boilerplate code by analyzing a given service's requirements presented as WS-Policy expressions. WS-Policy brings some dynamism into the runtime as well but that's off-topic. Writing WS-* code manually is error-prone. Modern WS-* kits do let developers plug their own handlers into the execution path, but the kits are mostly in control.

Writing RESTful services such that one gets as many of the advantages associated with REST as possible is mostly about writing the code yourself. This is not meant as a critique here. The thing is, in order to write a service which can truly live up to the promise of REST one needs to take charge, take control. In fact, I'm surprised that so many of us are so thrilled about turning existing Java classes into RESTful services. Developers wishing to write client code which can survive as many changes on the server as possible would be better off writing as much code of their own as possible.

The more I think about it, the more I'm getting convinced that arguments like 'RESTful services are better' are really arguments in favor of avoiding the code generation. I do start hearing now people saying that REST does not need WADL, or asking why you would need WS-Policy for REST : they do not deal with generated code, they prefer the freedom of dynamic languages or prefer writing their Java or C# web services code themselves, and don't mind writing some code which deals with the security or setting up various handlers.

Does it mean it's wrong to promote RESTful services with the help of WADL or JSR-311 ? I don't think so. Many users need to take simple first steps in this respect; only if they appreciate over time that exposing their resources the RESTful way gives their application an edge will they move to the next level. Or they might decide to stay with the generated code if it works for them.

Code-generated RESTful services can be the first step for many developers. Especially for those who are lazy like me :-). What would be the difference then between code-generated RESTful and SOAP clients ? A typical SOAP client interacts with up to two layers of services : with the service itself and with EPRs returned from that service. Likewise, it's difficult to imagine a generated RESTful client dealing with more than 2 layers (beyond collections and their entry resources, for example). It's not a big deal to write a bit more code on top of it to deal with an arbitrary number of nested resources, but perhaps it would make sense in cases like this not to code-generate at all.

WS-* clients get a WS-Policy boost and have client runtimes doing a lot of work for them.
That's why it's easy to see why there's momentum behind AtomPub in the RESTful community : Atom-powered client libraries will hide some complexity from users, will help them to address a number of security considerations, etc.

What happens next though ? Policies are needed everywhere, and there are initiatives in the Atom world, for example, to address the problem of advertising certain service requirements to potential consumers. How far will the Atom community decide to go ? Will they try to come up with a policy language which will let Atom collections and entries alike dynamically advertise their advanced capabilities, something which is possible in the WS-* world ? Or will they decide not to make it more complicated ? Arguably, the advanced capabilities WS-Policy can bring into client runtimes may not be needed for the majority of applications.

It does seem it's not about WS-* vs REST at all. Either way, the challenge is how to avoid creating REST-*.

In the end, I'd like to comment briefly on a WADL metamodel published post by Jean-Jacques Dubray, as I borrowed the REST-* term from that post. I think one can read a lot of interesting stuff on Jean-Jacques's blog, but I found the conclusion of the post to be a bit emotional :-) :

"Good luck with your enterprise implementation of REST :-)"

I don't think it's going to stop people from actually doing REST across the enterprise. I believe one can. After all, the RESTful approach is Turing-complete (I don't remember what exactly that means :-) but in this context it means one can do any type of enterprise service with REST).

As far as I'm concerned, the question is what it means to do REST across the enterprise. Something tells me REST-* may not be the way to do it. But it's a bit too complex to guess it right :-)

Friday, November 30, 2007

Is WSDL really a problem ?

I was reading a "REST : avoiding mistakes of SOAP toolkits" post from Steve Loughran. The post was not very different from other similar posts on the subject, but there were at least 3 reasons I really wanted to comment. I might be stepping onto thin ice here :-), but I simply can't resist.

So here it goes. Steve 'Ant' Loughran says in his post that "Sanjiva 'Wsdl' Weerawana has stuck some slides comparing REST and WS-*". LOL. I hope Steve will appreciate it.

The post itself is quite interesting, and I think Steve actually concludes that the problem is not really with the use of description languages like WSDL but with the way the code generation works.

Let's forget about WSDL 1.1, SOAP encoding, code-first services and the fact that one can have 100 operations described in WSDL. The problem of dealing with the generated code is universal and spans any type of service and both client and server runtimes, irrespective of what description language you're using. While having no generated code makes the consumption of data more robust, the code generation is here to stay nonetheless. And there must be a way to make runtimes dealing with populating generated types smarter, and client runtimes dealing with such types more adaptable to ignorable changes. And if changes are not ignorable, it does not make that much of a difference what approach is used.

But it's really Steve's 'association' with Ant which I wanted to comment upon in the context of the REST vs WS-* discussion.

Steve says in his "Migrating from Maven to Ant" post (I had a hard time finding it; I remembered reading it but couldn't locate it, as Steve's blog shows the latest 20 pages, and I found it on page 46, just as I was about to give up :-)) :

"Maybe I should document how to move back from Maven back to Ant...Personally, I see many advantages of Ant above Maven, most of which come from the fact that the tool lets you do more complex things in your build process. Maven assumes that you are building and shipping components; Ant dictates less. Admittedly, this is personal opinion, but I getting tired of 'ant-sucks-maven-rules' propaganda"

I remember feeling the same when I was reading it. I didn't like Maven disrupting the comfort level I had with Ant. I really liked reading Java Development With Ant, it's one of the best pragmatic books I've read, and I liked it as much as I liked Michael Kay's XSLT book. Because it taught me to think in Ant and not turn it into a programming language as far as dealing with dependencies is concerned. Then Maven came in. I still don't know how to write a simple 'mojo'. But it actually works, and the strange thing is that many, many projects use it; people complain but it delivers. Maven takes dependency and build management to the extreme. If necessary you can go to the antrun plugin, but doing so seems so low-level after dealing with Maven, or rather after Maven looking after all you need, as far as setting up target directories, etc, is concerned.

Now, in the above quote from Steve, replace 'Ant' with Web Services and 'Maven' with REST.
Sounds familiar :-) ? Sorry, maybe it's only me who finds the Ant vs Maven arguments so similar to the ones one can hear in REST vs WS debates.

Writing a code to consume links

I was wondering, what does it mean to write client code which deals with resources whose state includes links ?

Suppose we have a resource which can be addressed like this :

http://host:port/collection/1/2/3/4

We start from http://host:port/collection, then we get http://host:port/collection/1, etc, until we get http://host:port/collection/1/2/3/4.
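
As a rough sketch, assuming plain GETs and representations that carry a link to the next child, such a client might look like the following; the starting URI is invented, and extractChildLink() stands in for whatever parsing a real client would do.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class LinkWalker {

    public static void main(String[] args) throws Exception {
        // Hypothetical starting point, as in the example above.
        String uri = "http://localhost:8080/collection";
        while (uri != null) {
            String representation = get(uri);
            System.out.println("GET " + uri);
            // A real client would parse the representation and pick the link
            // to the next child resource, if any; null means stop walking.
            uri = extractChildLink(representation);
        }
    }

    static String get(String uri) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(uri).openConnection();
        con.setRequestMethod("GET");
        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line);
        }
        in.close();
        return body.toString();
    }

    static String extractChildLink(String representation) {
        // Placeholder for link extraction from the returned representation.
        return null;
    }
}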

What kind of client application are we writing here ? It seems that code which deals with more than 2 levels (starting from http://host:port/collection/1/2) is actually a navigation tool which should allow a user to go back. Is that a reasonable characterization ?

Sunday, November 25, 2007

Is ATOM the way to do REST ?

James Snell answered the questions I posted earlier, thanks James.

In response to "Do you think AtomPub is the way to do REST ?" James replies (omitting the first sentence for brevity :-)) :

"There’s really only one way to 'do REST'. Atompub adhers to the REST architectural style to solve a particular range of problems. For that particular range of problems, Atompub is a very good solution".

I liked the answer. For the record, I agree that people wishing to understand REST better should look at the way AtomPub has been designed. My own awareness of AtomPub has risen a lot recently after James Strachan joined IONA :-)

That said, the reason I asked the question that way is that, as far as I can hear and understand, quite a few people reckon that if you want to do REST then you should just use Atom, due to the fact its format is understood by various tools, etc.

Here's what interests me. In my own understanding, exposing public data as REST resources is promising. AtomPub is good for dealing with collections of data, and thus it can be used not only to deal with feeds. It primarily deals with two levels of resources, parent resources (collections) and their children, entries.

So I'm wondering, would AtomPub be a good fit for dealing with RESTful services where parent resources have more than one level of descendants, e.g. collection/child1/child2/child3 ?

Offtopic : I also liked James's response to "What is a better way to protect investments made into WS-* ?" :

"We should be more concerned about protecting investments in our business goals than in protecting our investments in any specific set of technologies."

It's interesting; I'm wondering how wide the gap between these 2 concerns is. I'll muse more about it later.

Friday, November 23, 2007

REST for developers - it's only a beginning

In my last post I asked Steve Vinoski some questions about REST and Web Services. I'd like to comment on the questions a bit more. First though I'd like to thank Steve for taking my questions the way he did and providing interesting answers; likewise I appreciate the comments made by others.

One thing I'd like to clarify is that I personally do not need to be convinced about the viability of REST or the fact that the WEB is going to affect, one way or the other, the way we do distributed software. As I mentioned earlier, and indeed as is obvious from the public efforts of various vendors, REST is being noticed and thought about.

My questions were really a reaction to the very last "no contest" phrase in Steve's post. I respect Steve's visionary opinion, and by no means was it a kind of "how could you say that after all you've done with CORBA" reaction. "No contest" may well prove to be much closer to the reality at a wider scale sooner or later, but it just does not work for me yet.

Generally, arguments like "If you do REST your application will scale to millions of requests" or "Look, Google and Yahoo do it" or "it's just simpler" simply do not work for me : it may well point to a serious lack of vision on my part :-), but I find such arguments sometimes somewhat detached from the reality, and this is exactly why I liked RESTful Web Services : while favoring REST it was not very absolutist :-) at the same time.

So on to some of the questions. Inspired by the Out Of the Frying Pan post where the first counter-argument is hidden in highlighted comments :-), I wanted to say through my random set of questions that IMHO there's still a lot which needs to be done for REST for it to truly start competing with Web Services, as far as attracting large groups of developers is concerned, and as far as writing RESTful applications for more than just browsers and generic Atom readers is concerned.

Questions 1, 2 : Code generation, dealing with the change, large-scale client-side REST programming.
I'm not sure the problem of the generated code being too brittle is specific to Web Services for a start. The only thing which is specific to Web Services is the interaction model built around SOAP. Bill de hÓra in his post "Why To Use Atom" (it seems to have disappeared from Bill's blog :-)) says that one of the main reasons to use Atom is that Atom clients know how to ignore unrecognized extensions. I'd like to say that if you look at atom:feed and hello:world elements coming from the wire, there's absolutely no difference as far as dealing with unknown extensions in atom:feed and hello:world elements is concerned.
Client runtimes dealing with the generated code have to learn how to optionally ignore unknown extensions. It won't solve all the problems arising from producers changing their services, but it will mitigate the problem of dealing with the change. There's already some support in both .NET and Java for it to start happening. Better code generation can benefit all types of client consumers, but it can help popularize client-side REST development too and let users jump to their data of interest straight away. You might say it will trick users into believing the WEB is not there; so what ? :-), it's just a start, don't get me started on code-first development in REST on the server side :-)
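
As a small sketch of what 'optionally ignoring unknown extensions' could look like on the Java side, here is a hand-tweaked JAXB type with a lax catch-all for extension elements; the Feed type and its fields are made up for illustration.

import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAnyElement;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "feed")
public class Feed {

    // Known content is bound as usual.
    @XmlElement(name = "title")
    public String title;

    // Unknown extension elements are collected here instead of being rejected;
    // with lax = true, elements the context does know about are still bound
    // to their JAXB classes.
    @XmlAnyElement(lax = true)
    public List<Object> extensions = new ArrayList<Object>();
}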

WADL : its strongest feature IMHO is that it lets you point to the data you want to deal with by using XPath expressions. I wonder, can one use it to deal with all the AtomPub services out there to avoid the need to install a client-side library for every AtomPub/GData-like protocol ?

3. Is the idea of an interface broken ?
I was referring to this comment, further commented upon here : "we're not confused at all that a BIG part of the attraction to folks is opaque URIs and no metadata".
Are minimalistic interfaces viable ? I believe so. Coupled with a good extensibility policy for the data pushed through them, such interfaces can be a friendlier alternative to totally generic interfaces without compromising much on the need to deal with changes. Ultimately, a generic interface is understood by all the generic tools, but I don't believe that at the moment one can "switch communities" more easily with generic interfaces when writing not-so-generic consumers : GET and friends do not tell you about the semantics.

4. Software modules using generic methods : I was somewhat emotional here, sorry :-) If everyone wants only generic interfaces, then why don't we start advocating that all software components use the same generic interface the WEB uses, and have them rely on various properties passed through call contexts to understand what needs to be done. No need for assemblies, OSGi, or for the first class diagram in Stefan's presentation :-)

5. Is WS-Policy useless ? Dan asks Is Security The Killer Feature of WS-* ? I won't go into this topic :-). But IMHO WS-Policy is a very strong feature indeed. Facilitating the interaction with services which have complex requirements, both at the tooling level (design time) and at runtime, is what WS-Policy can bring. That's why either a similar language, or maybe even the same language, needs to be used for RESTful services if the doubts about their ability to deliver complex applications without forcing clients to write sophisticated bootstrapping code are to disappear.

There're more questions, but this post is getting long so I'll comment on one of the last ones :

"What is the difference between service factories found in Corba and RESTful services creating new child resources as part of POST (as far as managing child resources is concerned" ?

I agree, the interface is different. But what I was really trying to say is that, as far as transient resources are concerned, to which few if any of the possible advantages of REST can be applied, I'm not at all sure that creating a new resource per every POST scales well. You need to delete them explicitly (an extra call), what if the client ends up with a stale URI, etc. I think coarse-grained interfaces may have an edge here. Possibly in areas of distributed activities, etc...

That's it so far, thanks to everyone for commenting and reading !

Friday, November 16, 2007

Questions for Steve

First I read a Dare Obasanjo story about getting disappointed in (Big) Web Services and never having to go back to them again. Dare refers to James Snell's notes.

It's interesting how everyone picks up statements from these notes, which means they're good. Sanjiva Weerawarana picked up a note about the complexity of REST, while Dare picked up a note where James said he'd never had to go back to Web Services since he started working on the WebAhead project. Don Box said he liked the notes too, but I'd be interested to know what exactly he liked. It's off-topic, but my favorite notes are :

* “Atompub + Required Vendor Specific Extensions == New Protocol”
* "Anyone who picks a technology just because it’s popular in 2007 is ..."

Anyway, Dare concludes :

"At this point I realize I’m flogging a dead horse. The folks I know from across the industry who have to build large scale Web services on the Web today at Google, Yahoo!, Facebook, Windows Live, Amazon, etc are using RESTful Web services. The only times I encounter someone with good things to say about WS-* is if it is their job to pimp these technologies or they have already “invested” in WS-* and want to defend that investment."

I read Dare's post and wondered, with sadness :-), why people like Dare and James have become so disappointed. Did they think that Web Services would rule the WEB ? Dare says he was not happy translating a SOAP RSS service into an RSS feed; I can feel his pain. Have they ever looked at Web Services as an integration technology ? I thought for a sec the last question would be a killer question...

If only it were so simple :-). As it happens, people who have seen the real stuff have become totally disappointed too. One of them is Steve Vinoski. Steve says in his post Dare is Right You build Real Working System with REST

"Finally, I realized that WS-* was simply not worth it to any customer or to me. My decision to leave WS-* behind and use only REST was based entirely on real-world commercial integration issues"

and concludes :

"Nowadays, all the distributed systems development I do is REST-oriented. I know from significant first-hand experience what both sides of the coin look like, and there’s no question that REST-oriented systems are easier and less expensive to develop, and far less costly to extend and manage. Like Dare said, anyone who thinks otherwise is either so emotionally or monetarily attached to WS-* that they can’t be objective, or they don’t actually write any code or build or maintain any actual systems. It’s no contest, really."

It feels like the last nail has been banged into a WS-* coffin. It feels like a strong message indeed. No compromise. Either REST or nothing.

I'd like to ask Steve a few questions. It's tough asking Steve questions for someone like myself who was 'lured' to IONA by the desire to work with people like Steve. It was not only the desire to be together with IONA people like Steve, though, which eventually brought me to IONA, but a mission (impossible) to solve all the world's problems with the help of web services :-) ! So I'd like to ask Steve a few questions about software, web services and REST, in no particular order :

1. Do you think client code generation is evil ? If yes, do you expect people to do manual programming on a large scale ?
2. If code generation is acceptable, would you welcome WADL? If yes, what to do with generated client types with respect to the change management ?
3. Do you think the idea of interfaces is broken ? Do you see any point in creating minimalistic yet not generic interfaces while encouraging users to focus on data ?
4. Would you expect, in the future, even software modules to interact with each other through a common generic interface ?
5. "WS-* was simply not worth it to any customer or to me" - was it not ?
6. Do you think WS-Policy is a useless technology ?
7. Do you think AtomPub is the best way to do REST ? Is AtomPub better than SOAP ?
8. What is a better way to protect investments made into WS-* ? Throw them away and start from scratch or start embracing the WEB while improving on what can be done with WS-* ?
9. Do you think the "integration" problem IONA has been so good at solving is an "overblown" problem ?
10. Can you please, if possible, describe a bit what kind of (software) clients will use your RESTful systems, (Web)UI tools or client software modules pushing the data up the response chain ?
11. What is the difference between service factories found in Corba and RESTful services creating new child resources as part of POST (as far as managing child resources is concerned) ?
12. Do you always prefer dealing with fine-grained resources ?

Perhaps more questions to come later.

Thanks.

Monday, November 12, 2007

Out Of The Frying Pan and Eating the Cake

Don Box says :

"Personally, my dream stack would be ubiquitous WS-Security/WS-Trust over HTTP GET and POST and tossing out WSDL in favor of doing direct XML programming against payloads from VB9 (or XQuery), but hey, I have unusual tastes."

In response Sam Ruby posts Out Of the Frying Pan. I read it the first time and thought, hmm..., I'm not getting it. Let's try a few more times while focusing on the text Sam highlighted. Still my brain does not help me. Apparently, it was not only me who found the text a bit difficult to grasp, so to say :-)

Ok, so I've tried to see what others are saying; maybe they'll help me understand.

From Dare Obasanjo, who keeps banging nails into the WS-* coffin (or the WS-* vs REST discussion ?) :-), I found out about OAuth. So little time, so much to learn. Some knowledgeable people say OAuth may be incomplete yet, and I'm wondering whether it will do well beyond web applications and social networks. I need to learn this stuff. Anyway, I've seen him commenting on the vested interests of various web properties; it's getting closer, but what is it that Sam was trying to say about WS-Security ?

Now, I'm seeing "Having one's cake and eating it too" from Don Box. These guys can understand each other from half a word :-). Following their exchange :

Sam : "Out of the box, exactly what security mechanisms does WS-Security provide? "

Don : "Without a mechanism for getting a token, not much. I typically think of WS-Security and WS-Trust as a unit, but you are correct, they are distinct. "

Sam : "Using only those two specs, what "ValueType" does one use for Kerberos? X509?"

Hmm... I think I'm nearly getting it now, but not quite. Dan's post comes to the rescue, or comments to be precise :

Sam Ruby : "Executive summary: Don’s criticism is that HTTP amounts to little more than a pluggable framework for incompatible authentication schemes. His proposed replacement? A pluggable framework for incompatible authentication schemes."

Here we go. It's brilliant. That's why I like the blogosphere. One day you can get something like :

"Guess what ? Web Services are like CORBA and half of your web services projects will fail". Do you really want to talk about it ?

The next day one can get something really interesting. I feel I've just learnt more about WS-Security and WS-Trust than I thought I knew before.

And here's my 2c answer to Sam's question :

It's spot on that WS-Security is really a pluggable framework for incompatible authentication schemes. It's however the "framework" which makes a difference, at least for the time being. It makes it easier to deal with Kerberos and X509 by dictating that it all goes into SOAP headers. This, and the fact that some services to be secured are coarse-grained. No problems with security departments ignoring the interop concerns :-) It can make it easier to provide product-level support for advanced web services security. And WS-Policy can make it easy to consume WS-Security expressions.

Have I missed the subtle point of Sam's question ? Maybe, but it was fun trying :-)

Dan asks :

"Maybe the question we need to ask is how do we do WS-SX related stuff over Just HTTP"

How about :

atom:feed/atom:headers/ :-) ?

Sunday, November 11, 2007

Pragmatic notes

James Snell posts excellent notes on the QCon notes Stefan posted earlier.
These notes are interesting to read because they're pragmatic in nature.

I don't want to select certain entries there to prove that, you know, even people who support REST strongly do believe that it's complex. They still believe REST is better, but such notes are much more thought-provoking. Such notes have more potential to convince and generate healthy debates.

One note from James has attracted my attention :

"Now that I’m working for IBM’s WebAhead group, building and supporting applications that are being used by tens of thousands of my fellow IBMers, I haven’t come across a single use case where WS-* would be a suitable fit."

This is a strong point all right. One thing I'd like to say though is that there are not too many people out there now who believe that Web Services can rule the WEB. It seems perfectly normal that no Web Services are used in the applications the WebAhead group is working on. I don't know much about those applications, but I'm presuming we're talking about clients being able to consume, with their (Web)UI tools, the *data* coming out of the blogs, push new data into those blogs, etc. Blogs may not be the only web apps, but we're talking about generic tools dealing with the data, right ? Maybe I'm just not understanding; I apologise if my one-sentence analysis misses the actual reality.

Friday, November 9, 2007

Pragmatism vs Religion

Stefan posted notes on some of the presentations made at QCon SF. Going a bit off-topic, I'd like to say that Stefan does a really great job of providing a lot of links to various resources about REST and Web Services. His blog reminds me of Eric's great Pulse Java blog, which unfortunately is now defunct. I don't agree with some things Stefan says but I love his blog nonetheless :-)

So back to Stefan's notes. First, Sanjiva Weerawarana's talk was presented. In his talk Sanjiva discusses the pros and cons of REST and tries to say that WS-* may not be a dead horse yet. Going to QCon with this presentation was quite an extraordinary thing to do, I'd say.

One of the other talks Stefan presented was Pete Lacey's talk. Here's the first note :

"Agrees with everything Steve said, disagrees with almost everything Sanjiva said".

and then

"scalability: both at runtime (1 million simultaneous clients) and ability to connect (500,000 to begin with)"

> Yea, you can't beat this argument, all right

and

"information accesible to one degree or the other to anyone: managers, shadow IT, proto-geeks and mom"

> Really ? One can just take any URI out there and share it with everyone ?

etc, etc...

Sigh... It's like listening to a broken record which starts from the beginning, reaches the end and then goes back to the start, etc.

It's so typical. It also leads to an interesting observation.

When Web Services were starting to take on the world, most people who thought of them as a good step forward would likely say : REST ? Huh ? No way, it just does not work. One can remember Mark Baker trying to raise awareness of REST on all the forums out there and being ignored nonetheless : what is he talking about ?

So some time has passed. Most of the people in the enterprise world are well aware now of what REST is. The enterprise is watching and trying to get something out of it. People start to understand that the Web is powerful, and that rather than fight it, it's better to learn how to be closer to it. Vendors start providing tools making it easy to expose data to the WEB. Other things are happening.

It's a pragmatic approach. Web Services proponents are listening, learning, adapting and thinking. They understand the Web matters. They'll be the first to adapt their products once they understand and see it's working.

So what about RESTafarians ? Do they listen to what Web Services proponents are saying to them ? No. Just no. REST wins. Period. Web Services are going to die. If you're with REST you'll scale, you'll manage versioning and extensibility in a much simpler way, and you can share your information with everyone. And no one will call you 'enterprisey', so you can afford to face everyone else at the beer party rather than being ignored.

When I was young I used to feel stronger with my older and bigger friends standing behind me.
It reminds me of RESTafarians throwing the same arguments at you all the time while blissfully ignoring whatever one says to them about Big Web Services. REST is simple, Web Services are complex, just do everything with REST and take it easy. You can never lose if you have friends like the WEB, Google and Yahoo standing behind you. Do Google's services scale naturally, because their resources are addressable, or do they scale because of things like BigTable ? Who cares.

This is a religion. Can a religious person admit something may not be ideal in the religion ? No. You're either in or out; if you admit it you're going to lose the trust of others...

Pragmatism leads to results being achieved. Religion leads to conflicts and wars.

Friday, November 2, 2007

JAX-WS 3.0 Wish List

JAX-WS is a good effort. It's much better than JAX-RPC, it's much more flexible, and one can use it to write all types of services. Granted, it's used most of the time for building Plain Old RPC SOAP Services (PORSS :-)), but it does have good features and it relies on all those cool Java things like annotations, generics and Executors.

JAX-WS is quite widespread, but there's also a new kid on the block out there, JAX-RS. At Sun, JAX-WS and JAX-RS are implemented in two different projects. There's also a clear indication that future runtimes will attempt to combine the best of all worlds; they'll try to catch two rabbits at the same time, if you wish.

So I'm curious what the future holds for JAX-WS. Will it survive ? Or will we see a single JAX specification in some time ? Whatever the future of JAX-WS holds, here's a wish list for its version 3.0 (if it ever happens to come to life) :

1. Deprecate Java-first web services development. Yea, developers like it, things like annotations are so appealing. There's a catch though. It just does not work. Do you blame WSDL for the perceived interoperability issues of Web Services ? Nope, huge WSDLs are only partly to blame. It's that code-first development which makes it so easy for developers to quickly come up with a web service and send around that byte[] array or that List or Map.

By the way, I think I've seen proposals to quickly generate a WADL contract out of your code, sounds familiar, doesn't it ?

Code-first web services development is an evil which many people have talked about before.
Well, ServiceContract-first development is not exactly a very pleasant experience either, but it's better nonetheless. Policy-First Development, which I'll chat about later, will make it easier. Tools like Artix Registry/Repository are doing it (briefly referring to IONA's offering here), and it's going to blow away code-first development and make it easier to deal with WSDL and indeed WADL contracts.

2. Deprecate doc-literal-wrapped. This one is a strong opponent. It helps you to create nice-looking little signatures, completely shielding you from the reality that it's an implementation of a web service which you're dealing with :

int doIt(int x, int y);

It's nice, isn't it ?

The only problem is that as soon as you add an ignorable property to the type which has been unwrapped, all the signatures break. This is actually a problem of the code generation, not of web services. Sometimes this is exactly what people want. But there's a better way to control the change : just don't add extensibility points into your schema, thus ensuring the validation will fail. Either way, the signatures should always be something like :

ComplexTypeReturn doIt(ComplexTypeRequest in);

This way one can control the change much better. More on it later, on the code generation for client runtimes and on 'must ignore'.
I'm wondering, when a WADL contract is used to generate the code, will Sun come up with a schema design pattern which does the same thing doc-literal-wrapped does ?
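
For what it's worth, here is a minimal JAX-WS sketch of the 'bare' alternative to doc-literal-wrapped; the DoItRequest/DoItResponse beans are hypothetical stand-ins for schema-generated types, and the point is that adding an ignorable property to them does not ripple into the method signature.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.xml.bind.annotation.XmlType;

// Hypothetical payload beans; in practice these would be generated from the schema.
@XmlType
class DoItRequest {
    public int x;
    public int y;
}

@XmlType
class DoItResponse {
    public int result;
}

// BARE parameter style keeps the whole payload as one complex type,
// so the signature stays stable when the schema gains new optional content.
@WebService
@SOAPBinding(style = SOAPBinding.Style.DOCUMENT,
             use = SOAPBinding.Use.LITERAL,
             parameterStyle = SOAPBinding.ParameterStyle.BARE)
interface CalculatorService {

    @WebMethod
    DoItResponse doIt(DoItRequest request);
}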

WS-Policy will rock the world of web services

WS-Policy is the most important thing which has happened in the world of Web Services since their invention (since SOAP, for better or worse, depending on your appreciation of it, came onto the scene).

One will see complex specifications like WS-Security and WS-BusinessActivity, touted as being the cause of interoperability problems, coming to life, with UI tools completely hiding the details from developers and with client runtimes getting closer to requiring very little, if any, configuration at all.

Policies are obviously not specific to SOAP-based web services. One can do, for example, message-level security for RESTful services, no problem. And yes, these services may want to advertise their runtime capabilities so that the client tools can ensure no manual coding is done. Look no further : the WS-Policy language is simple yet powerful enough to cover a lot of ground in the world of policies.

By the way, a short off-topic statement on WS-Security, SOAP and REST. Yes, HTTPS just works, and yes, one can do message-level security for RESTful services. WS-Security may offer way too many options, but there'll always be some customer out there who will find a use for all of those options. The thing about SOAP is that it makes it easier to productise something like WS-Security. Once a customer gets it working, with the help of WS-SecurityPolicy at the start and those SOAP headers at runtime, it will be a tough decision to drop it just because the underlying protocol (SOAP) is considered way too complicated, especially once all SOAP nodes learn how to reply to GETs.

It can also be tough to describe that message-level security is done on a per-resource basis. It's the multitude of resources which can make it difficult to do something like message-level security on all or some of them, by describing such requirements in advance for example. A policy language like WS-Policy can help here too by facilitating the auto-discovery of requirements.

Wednesday, October 31, 2007

One for reads, many for writes

So we have GET and POST. And we also have PUT, UPDATE and PATCH. Please note UPDATE is the verb introduced by Web3S, while PATCH is the verb likely to be used in AtomPub based applications.

POST, PUT, UPDATE and PATCH are all about writes. They have different semantics. But they're about writes. Actually, POST and PUT can both be used to create new resources, depending on who is in charge of the new resource's URI (thanks to the Book). Also, POST with a Multipart/Related content type seems similar to this possible application of PATCH.

PUT is considered idempotent. Unless it is used to create new resources. Well, it still might stay idempotent in this case, as long as no POST is allowed on it. Otherwise you create your new resource with PUT, and in no time someone will start POSTing to it after discovering it through GET, and then DELETEing it.
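
Here's a rough Java sketch of the two creation styles in play; the URIs and the payload are invented. With PUT the client picks the new resource's URI (so repeating the call just overwrites the same resource), while with POST the server picks it and repeating the call creates yet another resource.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CreateStyles {

    public static void main(String[] args) throws Exception {
        String entry = "<entry><title>hello</title></entry>";

        // Client-chosen URI: an idempotent create-or-replace.
        send("PUT", "http://example.com/collection/myEntry", entry);

        // Server-chosen URI: the new location comes back in the Location header.
        send("POST", "http://example.com/collection", entry);
    }

    static void send(String method, String uri, String body) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(uri).openConnection();
        con.setRequestMethod(method);
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "application/xml");
        OutputStream out = con.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();
        System.out.println(method + " " + uri + " -> " + con.getResponseCode()
            + ", Location: " + con.getHeaderField("Location"));
    }
}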

The point of this post is not to try to claim all these write verbs are confusing. They're not confusing as long as you know what you're doing, when to use those verbs and have built your resource handlers properly, etc. They can help you to add more meaning to the updates.

Knowing when it's better to apply POST versus PUT, for example, is a useful thing. It can help you build a purer RESTful application. Rather than having a bunch of overloaded execute methods, it's appealing to have verbs whose names suggest some semantics.

When I see healthy discussions about which verb should be used, it reminds me of healthy discussions I've seen about how to write a good Java application, for example. It's about the skills, about knowing your tools.

There's only one thing I don't understand : what difference does it make for the WEB ?

Not yet, but maybe I'll see it in some time. The main question for me is not whether it's better to have create() and update() methods or just several overloaded execute methods. Rather, will it make any difference to the WEB at large that some applications are using GET and POST while other applications are using GET, POST and PUT ? So far, discussions about when to use POST vs PUT etc focus more on technical purity than on interoperability.

For example, all the HTML forms out there use POST even when deleting resources, and it has not crippled the WEB, has it ? Future forms will use PUT and DELETE, but what difference will it make for the WEB ?

This interesting example demonstrates how PATCH can be used to do what GData does today with batch updates. Yes, it's interesting, I'm sure it will work, but what difference will it make for the WEB ? Doesn't it seem a bit too complicated just for the purpose of avoiding overloaded POSTs ?

SOAP-based services are blamed for the fact that a lot of custom verbs are pushed over POST. In many cases it's a fair point (actually, I'd like to muse about it later). Now, as far as interoperability in a RESTful world is concerned, one needs to know the semantics of a given application in order to decide when POST vs PUT vs PATCH vs UPDATE should be used. It's unlikely a tool will be written soon which will understand on its own when to use which verb when updating a given resource. If you're using an Atom-powered client then maybe the decision will be made by the tool. But while there's no doubt the Atom adoption rate will increase, there's also little doubt that other RESTful protocols, both generic and custom ones, will also progress.

The popular argument that a uniform interface lets one switch communities easily won't work with a lot of write verbs being used out there. This argument in itself is not very practical anyway. The uniform interface does not tell you about the application or when to use which verb. HTTP OPTIONS is there, but it may not always help. Say, in Amazon S3 PUT is used to create new resources, while in AtomPub it's used to update the state of a given existing resource.

At least I can imagine how one can write a truly generic tool, possibly working in a semantic web world (I wish someone wrote a book like RESTful Web Services about it), if all the resources out there supported a single update method.

A single generic read method is good enough for all. Why is everyone so fixed on having many write verbs ?

Also, I'm wondering, does true interoperability exist only in the world of fantasies ?

Sunday, October 21, 2007

Doubts about links in banking applications

Stefan Tilkov has posted a Doubts about links entry where he talks about why doing

GET http://example.com/192879202039374738

is better than having a method like

Customer getCustomer(ID id) :

If I have an id, I need to know that a) it is, indeed, a customer ID and b)
that I have to call get getCustomer() method to retrieve more information.
There’s no agreement, no uniformity to the interface.

etc... This is all the proper REST talk from Stefan.

The reason this entry caught my attention was that the sample application referred to was a banking application. Surprisingly, banks/accounts are talked about very often in the context of discussions about web services; I myself have referred to banking applications a few times before.
Surprise, surprise, I've never written a banking application before :-), but those banks are so handy when discussing applications built around factory patterns.

As it happens, I'm thinking a lot, like many other people, about when and why I would use a RESTful style when building a service as opposed to using a more coarse-grained approach.

Questions like : what is the audience (who is going to consume the service), what is the ultimate benefit, how practical is it, etc, are bothering me. I see a lot of potential in exposing different types of data resources to the WEB. Their state can be transient, but the top-level resources themselves should be fairly stable, as far as their lifetime is concerned.

After all, as far as programming REST is concerned, it's all about making the life of consumers easier, right ? It's nice when they can use their browser and see the application data, or use, say, an Atom-enabled reader and check the events coming out of my application. It's cool when they can build mashups on top of my own data.

So when I see people saying, ok, when you do your banking applications, just use GET when referring to, say, people's accounts, because it's RESTful, I'm getting confused. I'd love to see at least one analysis out there which would explain what it means, practically, to write a banking application using a RESTful approach.

When I'm doing my online banking I go to http://mybank/internet, log in there, and start a transient and secure session. I'm not going to add a link to my account to my Favourites folder, nor am I going to build a cool mashup on top of it. There's unlikely to be some kind of generic intermediary sitting between the client and the server and doing some advanced caching.

I'd like to understand what it means to write a banking application using a RESTful approach. I'd also love to see, at least once, someone who unreservedly advocates REST saying : maybe for some types of applications a resource-oriented approach might not be the best fit...

I know, I can write most of my applications using a RESTful approach. But I also know that I can use XSLT to split a 1MB string using recursive functions, or solve a complex chess problem; the question is, what for ? XSLT is really perfect at doing apply-templates and match, but not at solving algorithmic problems.

So as far as I'm concerned, I'd like to see the ultimate goal of going with REST for a given service, rather than doing it just for the sake of it and then failing to figure out who is going to benefit from it, while telling all my friends at the same time that I've written a RESTful service.

I'm looking forward to seeing more pragmatic and practical discussions in this area.

Sunday, October 14, 2007

When to use Atom

Yaron Goland has published a thought-provoking entry about Atom. It's fun to read too.
I've never seen Star Wars, not a single episode; I should have. I didn't immediately recognize who General Weasdel was, and only after reading an interesting discussion on Sam Ruby's blog did I realize who Darth Sudsy was :-).

In short, one of the questions Yaron raises is : when should one use documents in the Atom syndication format (ASF), given that one has to tunnel custom XML inside individual atom entries as opposed to just passing this given XML around as is ?

Whether the example Yaron uses is contrived or not, it's hard not to notice that, yes, one just adds an extra layer of complexity when wrapping the content inside Atom entries. Yes, the example shown can be reformatted to make it more readable, but one will still have some markup there which has nothing to do with the original content.

So when is it worth it ? I've tried to contemplate a bit about it here and I'm glad to see this discussion happening now.

As far as I'm concerned, figuring out when to use ASF is the least difficult part. Dare Obasanjo points to the fact that Atom is good at representing streams of (timestamped) microcontent, and Sam Ruby and Yaron offer some thoughts on when Atom is better used. This comment also suggests that Atom-wrapping a given piece of content is not a de-facto choice; it depends on what people want to achieve by doing so.

I see ASF being particularly good at representing arbitrary types of events, for example.

The difficult question is when it is really worth using AtomPub as the application-level protocol of choice. Just because there are Atom-enabled client tools out there ? So far I feel it matters only when generic tools are targeted, but I may be wrong.

Another reason which is cited often enough is that Google does it, with its GData protocol. Oh, man, Darth Goo-Goo-L and his general G'Day tah :-), that is. The idea of widespread internet programming with the help of GData-enabled client libraries might not be that far-fetched at all, you never know :-).

And I thought it was all just about sending simple XML around :-). The battle is just beginning : which format to use, what protocol to use, and so on and so forth :-)

Friday, October 12, 2007

My simple WEB

In my simple WEB I primarily care about 3 main things :

* GET
* Addressability and Links
* Ignore Unknown Extensions

Web services need to be addressable, when possible and practical. This will let them live in the WEB. Consumers can GET something out of such services easily. Yea, they also need to be able to update these services somehow. So let's add POST.

Now, everyone knows about PUT, DELETE and some other verbs but so far I don't quite understand how my simple WEB will benefit from PUT and DELETE, so I'll leave them out for now. Once I understand I'll welcome them in.

I'd naively assume that this is all that is needed to have all the services I've built play with each other nicely, irrespective of the style used to develop these services.

What really surprises me in all those debates about which style of building web services wins is that very rarely, if ever, the consumer's ability to ignore unknown extensions, aka forward compatibility, is mentioned as an absolutely key ingredient.

This is one of those things which truly makes the WEB scale, as far as the economics associated with the cost of change and the usability of client tools are concerned. In a RESTful world, the focus is on the data. This makes it easier to deal with extensions : one just extends the language.
In a not so RESTful world people are often tempted to deal with new extensions by introducing yet another method or yet another interface all the time, even when it's avoidable. The lesson from the RESTful world is to focus on data extensibility and not on interface extensibility and interface changes.

Versioning and extensibility is a fascinating subject, and it's off-topic, so I'd rather chat more about it later; I'll refer to what some thought leaders out there say about it.

Thursday, September 20, 2007

Restful Web Services Review - Part 1

The RESTful Web Services book is the one to read these days. I enjoyed reading this book for a number of reasons. I found this book challenging me and teaching me along the way.
I've decided to split the review into two parts. This blog entry is about what I liked about the book, what I found interesting. The next part will be about me trying to question some of the assumptions made in the book.

So here's what I liked about the book, in no particular order.

This book is written by people who believe in and more importantly, practise REST. This alone is a strong enough reason for anyone wishing to understand REST better to go and get the book.

I liked the style of the book, it is very moderate. Compared to some of the bloggers out there bashing WS-* to death, the authors showed how one needs to convince, not by using populist proclamations, but by rolling up the sleeves and teaching and showing how it all works. I found it encouraging me to think harder, trying to understand it better.

I thought I knew how POST was different from PUT. Oh, well :-) Now I do know ! Their description of why the Amazon S3 service uses PUT to create new resources was both informative and practical.

The big plus of this book is that it presents a lot of the important information relevant to REST, HTTP, WEB which is spread across emailing archives, blogs and articles in a clear and concise way such that it's easy to understand and appreciate.

I liked a "My Web Service is my Web Site" idea. XHTML is given a lot of attention in this book and the effort to make XHTML a better language for the WEB deserves a lot of respect. Seeing how this format can be used such that the same response can be consumed by humans and machines was interesting. Microformats is a powerful idea all right.

I said it above, but I found the coverage of big Web Services, and the way they are contrasted with RESTful services, to be moderate. Yes, the authors believe REST does better overall, but their conclusion didn't make me feel defensive. Obviously, the suggestion to try to refactor SOAP services such that they can respond to GETs is a good one which many people like.

Examples of how URIs should be formatted for different types of search requests were helpful. Things like when to use a comma, etc.

GET matters : one of the main messages I totally agree with. I personally believe GET matters most. I don't want to go into things like hypermedia as the engine of application state (connectedness, as the authors describe it), late binding, etc. It's the ability to GET on the link that really matters IMHO. GET and addressability.

Uniform interface : universal clients. Powerful. More comments on it in the second part.

Great point about links, about how resources relate to each other, and about driving the application from one state to another.

The suggestions to create separate resources to deal with asynchronous operations and to model relationships are practical and useful.
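Roughly, the idea as I understand it (the element names and URIs below are mine, not the book's) is that a long-running operation gets its own resource : the client POSTs a request, gets back something like 202 Accepted with a Location pointing to a new "job" resource, and then GETs that resource until it links to the final result.

<!-- a hypothetical representation of the job resource
     returned from GET http://example.org/jobs/42 -->
<job xmlns="http://example.org/jobs">
  <status>pending</status>
  <link rel="result" href="http://example.org/orders/1234"/>
</job>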

The set of best practices on how to build RESTful web services is possibly the best one I've ever read.

There're other things which I may've missed.

Overall, I liked the book a lot. Useful, practical, insightful and, yes, challenging, if, like myself, you still see some value in those embattled WS-* :-).
I have little doubt now that one can rewrite pretty much every web service out there using a true 100% RESTful approach. I'm not convinced it is the right idea in all the cases where HTTP is involved though. Or maybe I just don't understand it and am not seeing far enough. I'm open and I'm learning. I'll try to argue with the book :-) a bit in the second part.

Do I recommend the book ? Of course I do !!!

REST and data sources

Dare Obasanjo has recently posted an interesting entry commenting on the formats Astoria will support.

What actually attracted my attention was the very first sentence saying :

"I’ve mentioned in previous posts that various folks at Microsoft have come to grips with the fact that RESTful Web services are the best way to expose data sources on the Web".

I personally believe this is exactly what RESTful Web Services are best at : exposing data sources on the Web. Furthermore, I feel it's public data sources (those visible to more than one consumer) which can be exposed in the most efficient manner. It's when dealing with such public sources that the most important RESTful properties, such as linkability, start paying off IMHO.

By no means do I wish to imply that this is what Dare wanted to say. It's just that what is said in that sentence fits perfectly well into my current view of what RESTful services are best suited for.

I'm still not convinced that RESTful Web Services are the best fit for solving all types of problems in the web services space. I've heard some other people say the same and I agree with them. I'll muse more about it later.

ATOM and WS-Policy

You may want to ask what a specification such as WS-Policy, with its WS- prefix :-), can have to do with the Atom Publishing Protocol (APP).

There're several things which I'd like to comment upon.

The first thing is that the notion of Policy is by no means specific to the world where big (a term borrowed from the RESTful Web Services book) WS-* services live.

Second, the WS-Policy Framework and WS-Policy Attachment specifications are actually very neutral. They both use existing policy profiles like WS-SecurityPolicy in their samples, but in themselves nothing stops people from using their own domain-specific policy expressions.

The Framework specification is solid : a good compromise between complexity and simplicity has been achieved, and one can build both primitive and sophisticated policy expressions using a limited set of WS-Policy language operators.
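Here's a small sketch of what I mean, using just the standard wsp:Policy, wsp:ExactlyOne and wsp:All operators ; the assertion elements themselves are made up for the sake of the example.

<wsp:Policy xmlns:wsp="http://www.w3.org/ns/ws-policy"
            xmlns:ex="http://example.org/policy">
  <wsp:ExactlyOne>
    <!-- alternative 1 : an authenticated consumer gets draft support -->
    <wsp:All>
      <ex:RequiresAuthentication/>
      <ex:SupportsDraftEntries/>
    </wsp:All>
    <!-- alternative 2 : anonymous, read-only access -->
    <wsp:All>
      <ex:AnonymousReadOnlyAccess/>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>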

The Attachment specification is fairly simple : it defines a number of attachment mechanisms and then proceeds to recommend how policies can be attached to existing WSDL contracts and UDDI registries. It also defines a mechanism to attach policies to arbitrary XML elements.
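For instance, if I read the Attachment specification correctly, its external attachment mechanism lets one bind a policy to a subject without touching the subject's own document at all ; the addresses and the policy URI below are made up.

<wsp:PolicyAttachment xmlns:wsp="http://www.w3.org/ns/ws-policy"
                      xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wsp:AppliesTo>
    <!-- the policy subject, identified here by an endpoint reference -->
    <wsa:EndpointReference>
      <wsa:Address>http://example.org/collections/books</wsa:Address>
    </wsa:EndpointReference>
  </wsp:AppliesTo>
  <wsp:PolicyReference URI="http://example.org/policies/books-policy"/>
</wsp:PolicyAttachment>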

I've recently read the APP Feature Discovery Draft. It's an interesting read. Here's one paragraph from its Introduction section :

"This document introduces an extension to the Atom Publishing Protocol (Atompub) service document [I-D.ietf-atompub-protocol] format that allows for the documentation and discovery of behaviors, functions or capabilities supported by an Atompub collection. Examples of such capabilities can include the preservation of certain kinds of content, support for draft entries, support for the scheduled publication of entries, use of a particular set of Atom format extensions, and so on."

APP is now considered to be suitable for much more than just describing the process of working with feed collections. A typical RESTful service will deal with collections and their members.

Likewise, from the Introduction it seems obvious to me that the goal the APP Feature Discovery Draft is trying to achieve is the one WS-Policy is trying to achieve too, except that WS-Policy is not trying to tackle the discovery problem in its current version.

It seems to me that WS-Policy might play the role of a bridge between the two worlds. APP is obviously a solid RESTful protocol. However it may be unrealistic to expect that everyone will use only APP in the future. There're other RESTful protocols around, such as the one dealing with Web3S documents, and there'll be a number of others too. What will unite all of these RESTful protocols is that the majority of them will support XML.

On the other hand, WS-Policy Attachment defines attachment mechanisms which let WS-Policy expressions be attached to arbitrary XML documents. WS-Policy Attachment does not mandate what types of documents policies should be attached to, as opposed to the APP Feature Discovery Draft which is tied to APP documents.

Here's the example from the APP Feature Discovery Draft, but using WS-Policy this time :
<service xmlns='http://www.w3.org/2007/app'
         xmlns:atom='http://www.w3.org/2005/Atom'
         xmlns:wsp='http://www.w3.org/ns/ws-policy'>
  <workspace>
    <atom:title>My Workspace</atom:title>
    <collection href='...'>
      <atom:title>My Atom Collection</atom:title>
      <accept>application/atom+xml;type=entry</accept>
      <wsp:PolicyReference URI='http://purl.org/atompub/features/1.0/supportsDraft'/>
    </collection>
  </workspace>
</service>


Using PolicyReference might not be correct, as the policy engine may try to dereference it. I'm not sure one can express the capability with the WS-Policy language in the same short way as suggested in the APP Feature Discovery Draft.
Perhaps something like wsp:PolicyAssertion could be introduced into the language to let policy authors say :

<wsp:PolicyAssertion URI='http://purl.org/atompub/features/1.0/supportsDraft'/>


In the meantime a better option might be :
<wsp:Policy>
  <f:supportsDraft xmlns:f='http://purl.org/atompub/features'/>
</wsp:Policy>


This option may seem more verbose, but the Policy operator provides for grouping of policies and for more sophisticated policy combinations. Once we have more than one feature it can be handy to group them, and this is where the Policy operator immediately pays off. Policy expressions can be applied to the service, collection and entry atom elements as well as to the custom content wrapped inside entries.
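For example, something along these lines (the second feature element and the namespace are my guesses, only supportsDraft comes from the draft) :

<wsp:Policy xmlns:wsp="http://www.w3.org/ns/ws-policy"
            xmlns:f="http://purl.org/atompub/features">
  <wsp:All>
    <f:supportsDraft/>
    <f:supportsScheduledPublication/>
  </wsp:All>
</wsp:Policy>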

Atom clients can then do things like policy intersection, by matching their own set of policy requirements with those associated with a given collection, its entries, etc.

This analysis may well be flawed in that it misses some important details. That said, it's difficult not to notice the obvious similarity between the goals the APP Feature Discovery Draft and WS-Policy are trying to achieve. It would be good to see a common language for expressing capabilities and requirements.

Uniform XML Vocabularies on the Programmable Web

The question which I've been thinking about recently is what difference uniform XML vocabularies (formats), as opposed to custom ones, make on the programmable web.


Specifically, while looking at the Atom Syndication Format and the Atom Publishing Protocol, I'm trying to answer the following question :

What difference does it make to the consumer that a given collection resource's state is represented as an Atom feed, with members represented as this feed's entries, as opposed to using a custom XML format to represent the state ?

I think I'm starting to understand better why people advocate using the Atom format to capture typical collection/member representations. One of the most cited reasons is that a lot of client tools understand the Atom format, which is an important enough reason for choosing a format which is widely supported. For example, GData-based client tools can be, say, both Google Calendar clients and simple Atom interpreters at the same time.


However, what I'm still not certain about is what the real audience of such uniform formats is.


Let's take one step back first. Many people asked : why do you use XML as opposed to that custom binary format ? One of the answers was that there were already a lot of XML tools available on the market. When writing client applications, you don't need to pick up a new parser every time : you just use the same XML parser, the same favourite XML API, get to the data of interest, feed it into JAXB-generated classes or apply XPath expressions, and do something with it. It makes sense for all types of client applications, be they UI-based tools interfacing with humans or low-level applications passing the data along the chain for some further processing.

Now, if we have two different XML formats representing the same collection resource, then it's still XML. It's trivial to get to the data of interest and find the links to traverse the collection further, using XPath, WADL's cool feature for pointing at the data of interest, etc. Using one format as opposed to the other does not help much to understand the associated semantics though.

Let's say I write an application based on a uniform format like Atom, say a Google Calendar application. If I get this application to consume an Atom collection representing a book library, then the only thing I can do with this collection is show it to the human user, even though the Atom book collection may contain hints as to how to deal with a given book entry.
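Here's roughly what I mean (the book elements and URIs are made up) : a generic Atom tool understands the feed, the entries and the links, but the actual book data inside atom:content means nothing to it.

<feed xmlns="http://www.w3.org/2005/Atom">
  <id>urn:example:books</id>
  <title>Book library</title>
  <updated>2007-09-20T00:00:00Z</updated>
  <link rel="self" href="http://example.org/books"/>
  <entry>
    <id>urn:example:book:1</id>
    <title>Some book</title>
    <updated>2007-09-20T00:00:00Z</updated>
    <link rel="edit" href="http://example.org/books/1"/>
    <content type="application/xml">
      <!-- domain-specific payload a Google Calendar client
           would know nothing about -->
      <book xmlns="http://example.org/books">
        <isbn>1234567890</isbn>
        <available>true</available>
      </book>
    </content>
  </entry>
</feed>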


This makes me think that the uniformity of XML vocabularies in itself does not matter much. What matters more is the associated processing model.

For example, let's take SOAP. Understanding the SOAP Envelope and Body does not help the application to understand the semantics, but it can let it become a SOAP node.
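In other words, any SOAP node can parse something like the following (the payload is made up), route it and process the headers it recognises, while still knowing nothing about what getBookDetails actually means.

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <!-- headers a node may process or relay without
         understanding the payload below -->
  </soap:Header>
  <soap:Body>
    <getBookDetails xmlns="http://example.org/books">
      <isbn>1234567890</isbn>
    </getBookDetails>
  </soap:Body>
</soap:Envelope>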

The Atom Syndication Format describes how a feed may contain entries, but APP completes the picture by describing the processing model. This processing model is a sound RESTful model. But one can deal with collection resources RESTfully while defining custom XML formats too.

I think it all mostly matters to generic tools which can let a user browse through an Atom collection, just like a browser lets a user browse through a collection resource whose state is represented as an XHTML page. A browser understands HTML links; an Atom-enabled tool can understand atom:link elements and let the user browse through any Atom-wrapped collection.

As I said, I'm not quite sure how other types of clients can benefit. If they need to handle books, then dealing with books represented as Atom entries won't help them to magically start dealing with fruits also represented as Atom entries.

I may've got it wrong, but I'm still looking into it and I'll be interested to learn more about how uniform XML vocabularies can be applied. I read this entry on the Microsoft Astoria blog recently; it was interesting to see the reasoning which led Pablo Castro to decide in favour of supporting Atom.

Thursday, July 12, 2007

Meeting the JetBrains team

I've had a chance recently to meet the people who do IntelliJ. Virtually. After reading one of the blog posts about different IDEs I proceeded to the JetBrains site and, after a few clicks, I somehow ended up at the page listing photos of their team.
It was an interesting experience, seeing all the different people and reading their recommendations on things like what to read, etc. They're all so different. It kind of adds some personality to the software products they make.

Wednesday, July 4, 2007

WSDL is evil and the blame game

I'm tired of reading posts blaming WSDL and the WS-* specifications, requiring those behind it all to apologise and proclaiming that REST is the only way to do it all. Come on, let's stop it. I find it both hilarious and ridiculous.
If you truly believe in REST then my advice is to teach, not to blame. Teach it the way the authors of RESTful Web Services do it (though WSDL is not treated as a winner there either :-)).
And be realistic. I wonder what the WSDL blamers will start saying when people do RPC-style services with WADL. Blaming WSDL for allowing people to write RPC-based services is pointless. Teaching how to concentrate on the data and how to evolve that data has much more value.

Thursday, June 28, 2007

Some comments on Web3S

Microsoft has recently published the Web3S specification which describes a RESTful protocol for accessing Windows Live Services.


I read about it in this Dare Obasanjo blog entry. I've read the entry and the associated Web3S FAQ with interest, simply because it's interesting to see how people are approaching the problem of creating a RESTful protocol.

I've then read some of the comments. I liked this one, as I can learn something from it, and I didn't much like this one, as it's more about bashing Microsoft than evaluating Web3S, except for the reference to the complexity of some URIs in Web3S documents.


I have a few comments so far. I'm not a REST expert; I'm a newbie in the area of building RESTful protocols. I can say that openly, not a big deal :-). A couple of things caught my eye.


1. I liked the idea of the merge : using PUT with a specific content type which indicates to the resource handler that it's a merge, not a complete replacement. I think it's a good way to handle what seem like different flavours of the same task : updating the state.

I've seen many long discussions on why it's important not to mix up POST and PUT. I tend to agree with the people suggesting using GET for retrievals and POST for everything else. I think the main driving force behind this low-REST idea is that it's not clear what the real benefit for the WEB in general is in people using PUT as opposed to POST.

So if a protocol goes to some length and makes a clear provision for when POST, PUT and DELETE should be used then it should be welcomed, but IMHO a RESTful protocol should be much more conservative in introducing new verbs to handle different shades of grey, given that PUT is the verb which is still probably underused and misunderstood.

That's why I thought the idea of a merge utilising the existing PUT was cool. Sam Ruby said in his comments that it makes Web3S a single-user database, but I reckon some optimistic concurrency technique can be brought to the rescue.

That's why I don't quite understand the necessity of introducing a new HTTP verb, Update. Actually, I've read it again and I think I do. I probably prefer the way GData does batch updates, as it avoids the introduction of a new verb. Nor do I understand the necessity of introducing a new verb, Patch. Will all these new verbs bring more value to the WEB ? Or fragmentation among its users ?
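To be clear about the merge idea I liked : the media type and the element names below are not taken from the Web3S spec, they're just a sketch of the general shape of it.

<!-- full replacement : PUT with Content-Type: application/xml
     sends the complete representation -->
<contact xmlns="http://example.org/contacts">
  <firstname>John</firstname>
  <lastname>Smith</lastname>
  <email>john@example.org</email>
</contact>

<!-- merge : PUT with a hypothetical Content-Type such as
     application/merge+xml sends only the fields being changed -->
<contact xmlns="http://example.org/contacts">
  <email>john.smith@example.org</email>
</contact>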

2. From the spec :
"Although String Information Items (SIIs) are modeled as resources they currently do not have their own URLs and therefore are addressed only in the context of EIIs. E.g. the value of an SII would be set by setting the value of its parent EII."

It's not clear whether SIIs will one day have their own URIs, as in

http://example.net/stuff/morestuff/net.examples.articles/net.example.article(8383)/net.example.authors/net.example.author(23455)/org.example.lastname.

IMHO it would be a mistake for cases like "lastname". What value can a resource like the one above bring in isolation ? Its state will be "Smith" or "Brown".

That's it so far...

Wednesday, June 27, 2007

The first post

So here's my first public post. It's an introduction really. I'm working for Iona Technologies in Dublin, Ireland. In a vibrant company, in a vibrant city. I've worked and lived here for 8 years, quite a long period of time.

My home country is Belarus and my home city is Minsk. I'm Belarusian, though many people I know reckon I'm either all the way from Kiev :-) or from Russia. I don't mind : my mother is from northern Russia, and the chances are high that some of my ancestors came from Ukraine. These countries have been closely related for centuries.

I've titled my blog "Musing about web services" because web services are where my professional interests lie and this is what I'll mostly be talking about : about WS-* services, about RESTful services, about just about everything related to issues like how to do it right, how to get the best out of all the technologies available out there, etc.
My position in general is that no silver bullet exists in this space, and I'll stick to it while doing my posts.

I may rename the blog over time if I find myself posting off-topic :-)

Stay tuned.