decoupling from HTML
M. David Peterson
xmlhacker at gmail.com
Tue Jun 28 10:37:49 PDT 2005
Oooh... that is definitely something that should be reopened and evaluated,
so we arrive at a solution that covers not only HTML/XHTML but, as mentioned,
the common data feed formats plus SVG, XUL, XAML, and any other XML format.
In quite a short space of time these will be seen more and more as the
default markup (and, as such, the default namespace) for a variety of "pages"
served up by default when a particular framework (e.g. XAML) is known to be
available on the client. In fact, when you download and install the latest
Indigo/Avalon WinFX beta release, you are given the option to configure IIS
6.0 to render the XAML version of a particular page when the client is
capable and the server contains the proper XAML page in the directory
requested by that client.
I realize that the page at the specified location does little more than act
as a way to locate the proper validation information, and as such it can be
HTML/XHTML without affecting the rest of the applications on that particular
server. But there are enough justifiable reasons (technical and marketing)
for a web server to serve, for example, only RSS and Atom data feeds that I
believe this will become fairly commonplace in a fairly short period of
time. In fact, services like FeedBurner already host and serve your data
feeds for you, so the requirement to actually have a public site doesn't
exist; instead you use a simple tool to create your entries and post them
directly to FeedBurner for publication.
What then?
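A minimal sketch of the kind of page under discussion — one that does little
more than locate the proper validation information via two link tags — and a
head-limited extraction of those tags. The URLs, the sample page, and the
regexp are illustrative only, not Net::OpenID::Consumer's actual code; the
rel values follow OpenID's HTML discovery convention:

```python
# Sketch: an identity page whose only real job is to carry two <link>
# tags, and a consumer-side extraction that only honours tags found
# inside <head>...</head> (so link tags smuggled into body comments
# are ignored).  All names/URLs here are hypothetical.
import re

identity_page = """<html>
<head>
  <title>my identity page</title>
  <link rel="openid.server" href="http://openid.example.net/server">
  <link rel="openid.delegate" href="http://www.example.com/user/">
</head>
<body>comments and anything else can go here</body>
</html>"""

def links_from_head(html):
    """Return {rel: href} for <link> tags inside <head> only."""
    m = re.search(r"<head>(.*?)</head>", html, re.S | re.I)
    if not m:
        return {}
    return dict(re.findall(r'<link rel="([^"]+)" href="([^"]+)"',
                           m.group(1)))

print(links_from_head(identity_page))
# → {'openid.server': 'http://openid.example.net/server',
#    'openid.delegate': 'http://www.example.com/user/'}
```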
On 6/28/05, Brad Fitzpatrick <brad at danga.com> wrote:
>
>
> On Tue, 28 Jun 2005, Mario Salzer wrote:
>
> > If HTML/XML parsers now check for the presence of <html>, <head>,
> > or <body> tags before actually reading out the two <link> tags,
> > they deprive themselves of supporting any other XML formats
> > which were especially designed with 'html compatibility' in mind.
>
> The reason the head checks were done in Net::OpenID::Consumer was to
> prevent people from hi-jacking other people's webpages by leaving
> comments/posts (which their software didn't strip) containing link tags.
>
> But even a regular expression that searches for any link tag after we
> ascertain that the document isn't HTML (say, no <html> or <body> or
> <head>) is still kinda lame. I suppose it's workable, though, if the
> regexp allows a namespace... but in that case I'd prefer a full-on XML
> parser so we can match on the /correct/ namespace.
>
> In practice, though, I imagine OpenID will be tied to HTML/XHTML, and I
> think that'll be fine.
>
> If you have a proposal though that doesn't add tons of complexity for a
> couple geeks doing XML+XSLT to make their homepage, I'm all ears.
>
> - Brad
>
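Brad's preferred approach above — a full-on XML parser matching on the
correct namespace — can be sketched like this. The sample XHTML document and
URLs are my own for illustration; only the XHTML namespace URI and the
OpenID rel names come from the discussion:

```python
# Sketch of namespace-aware discovery: parse the document as XML and
# match link elements by their fully qualified (namespace, localname)
# tag, wherever they appear, instead of regexp-scanning raw markup.
import xml.etree.ElementTree as ET

XHTML_NS = "http://www.w3.org/1999/xhtml"

def find_openid_links(xml_text):
    """Return {rel: href} for <link> elements in the XHTML namespace."""
    root = ET.fromstring(xml_text)
    links = {}
    for link in root.iter("{%s}link" % XHTML_NS):
        rel, href = link.get("rel"), link.get("href")
        if rel and href:
            links[rel] = href
    return links

doc = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <link rel="openid.server" href="http://openid.example.net/server"/>
    <link rel="openid.delegate" href="http://www.example.com/user/"/>
  </head>
  <body/>
</html>"""

print(find_openid_links(doc))
# → {'openid.server': 'http://openid.example.net/server',
#    'openid.delegate': 'http://www.example.com/user/'}
```

Because matching is done on the qualified tag, a link element in some other
namespace (or in a non-XHTML document) simply isn't picked up, which is the
point of matching on the /correct/ namespace.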
--
<M:D/>
M. David Peterson
[ http://www.xsltblog.com/ ][ http://www.xmlblogs.net ]