I’ve said the same about “Web Services” before. The SOAP and WS-* industry ignored what we already had — the Web — and shoehorned something alien into use instead. We can go a nice long way simply using the good old Web. Paul gives a short example, but his example “protocol” uses completely application-specific markup: a “weatherml” and some SIP/call markup.
This is the point at which both XLink and RDF people step in and say, “hey, what do these markups have in common? At least give us a cross-domain way of knowing which portion of each document is a hyperlink.” If Web APIs are Just Web Sites, you’d expect it to be easy to find the links between the pages, at least. Well, RDF people don’t shut up at that point (just as well, since in theory at least you can figure the linking out by looking at an XML Schema). We then start banging on about cross-domain classes and properties, e.g. if the weather markup wanted to talk about cities and locations, … or the call markup wanted to mention people … or the Atom feed wanted a bit of each of those, … why not just mix together domain-specific element names using some shared structural conventions? Which is exactly what RDF does.
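To make that concrete, here’s a rough sketch of what such mixed markup might look like in RDF/XML. The FOAF and WGS84 geo namespaces are real; the `weather:` vocabulary (and its `example.org` namespace) is purely made up for illustration:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
         xmlns:weather="http://example.org/weatherml#">
  <!-- a person (FOAF) located near a point (WGS84 geo) ... -->
  <foaf:Person>
    <foaf:name>Paul</foaf:name>
    <foaf:based_near>
      <geo:Point>
        <geo:lat>43.65</geo:lat>
        <geo:long>-79.38</geo:long>
        <!-- ... with a bit of (hypothetical) weather vocabulary mixed in -->
        <weather:forecast>light rain</weather:forecast>
      </geo:Point>
    </foaf:based_near>
  </foaf:Person>
</rdf:RDF>
```

Three vocabularies from three different communities, one document, and a generic RDF parser can pull the statements apart without knowing any of them in advance.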
How would this change Paul’s story? Well on the one hand, … the markup examples are less fragmented: you don’t have to understand an entirely new XML markup language for each application or domain. On the other, it opts us out of some usefulness from HTTP, since the granularity switches from document-typing to the level of individual properties and statements, meaning that saying things like “Accept: application/weatherml+xml” isn’t so easy to do, since the same bunch of markup might have bits of weatherml, bits of RSS/Atom, bits of Geo markup, bits of FOAF etc.
Perhaps we need some convention for sending HTTP Accept headers for application/rdf+xml where we can also optionally mention some specific RDF vocabularies, or indirectly mention a bundle of them to be used together (an ‘application profile’ in Dublin Core-speak). More on which maybe another time.
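A purely hypothetical sketch of what such a convention might look like on the wire (the `profile` parameter on `application/rdf+xml` is not a registered or standard mechanism — it’s just one way the idea could be spelled):

```
GET /weather/toronto HTTP/1.1
Host: example.org
Accept: application/rdf+xml;
        profile="http://example.org/weatherml# http://www.w3.org/2003/01/geo/wgs84_pos#"
```

The client says “give me RDF, and by the way, these are the vocabularies I actually understand”; a server could then tailor which statements it bothers to include, or point at a named bundle of vocabularies instead of listing them individually.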