
XML and Perl

davorg writes "One of Perl's great strengths is in processing text files. That is, after all, why it became so popular for generating dynamic web pages -- web pages are just text (albeit text that is supposed to follow particular rules). As XML is just another text format, it follows that Perl will be just as good at processing XML documents. It's therefore surprising that using Perl for XML processing hasn't received much attention until recently. That's not saying that there hasn't been work going on in that area -- many of the Perl XML processing modules have long and honourable histories -- it's just that the world outside of the Perl community doesn't seem to have taken much notice of this work. This is all set to change with the publication of this book and O'Reilly's Perl and XML." Read on to see how well Davorg thinks this book introduces XML text processing with Perl to the wider world.
XML and Perl
author: Mark Riehl, Ilya Sterin
pages: 378
publisher: New Riders
rating: 8
reviewer: davorg
ISBN: 0735712891
summary: A good introduction to processing XML with Perl

XML and Perl is written by two well-known members of the Perl XML community. Both are frequent contributors to the "perl-xml" mailing list, so there's certainly no doubt that they know what they are talking about. Which is always a good thing in a technical book.

The book is made up of five sections. The first section has a couple of chapters which introduce you to the concepts covered in the book. Chapter one introduces you separately to XML and Perl and then chapter two takes a first look at how you can use Perl to process XML. This chapter finishes with two example programs for parsing simple XML documents.

Section two goes into a lot more detail about parsing XML documents with Perl. Chapter three looks at event-driven parsing, using XML::Parser and XML::Parser::PerlSAX to build example programs before going on to talk in some detail about XML::SAX, which is currently the state of the art in event-driven XML parsing in Perl. It also looks at XML::Xerces, a Perl interface to the Apache Software Foundation's Xerces parser. Chapter four covers tree-based XML parsing and presents examples using XML::Simple, XML::Twig, XML::DOM and XML::LibXML. In both of these chapters the pros and cons of each of the modules are discussed in detail so that you can easily decide which solution to use in any given situation.
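To give a flavour of the event-driven style that chapter three covers, here is a hand-rolled sketch of the callback pattern in core Perl. This is illustration only, not code from the book: a real program would use XML::Parser or XML::SAX, which handle attributes, entities, encodings and everything else this toy tokenizer ignores.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A hand-rolled sketch of the event-driven (SAX-style) pattern that
# XML::Parser and XML::SAX provide: the parser walks the document and
# fires callbacks instead of building a tree. This toy tokenizer handles
# only bare tags and text (no attributes, comments, or CDATA).
sub parse_events {
    my ($xml, %handler) = @_;
    while ($xml =~ m{ <(/?)([\w:]+)> | ([^<]+) }gx) {
        if (defined $2) {
            $1 ? $handler{end}->($2) : $handler{start}->($2);
        }
        elsif ($3 =~ /\S/) {
            $handler{chars}->($3);
        }
    }
}

my @events;
parse_events(
    '<book><title>XML and Perl</title></book>',
    start => sub { push @events, "start:$_[0]" },
    end   => sub { push @events, "end:$_[0]" },
    chars => sub { push @events, "chars:$_[0]" },
);
print "$_\n" for @events;
```

The point of the pattern is that the caller supplies behaviour and the parser supplies traversal, which is exactly how the real SAX modules are organised.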

Section three covers generating XML documents. In chapter five we look at generating XML from text sources using simple print statements and also the modules XML::Writer and XML::Handler::YAWriter. Chapter six looks at taking data from a database and turning that into XML using modules like XML::Generator::DBI and XML::DBMS. Chapter seven looks at miscellaneous other input formats and contains examples using XML::SAXDriver::CSV and XML::SAXDriver::Excel.
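The print-statement approach that chapter five starts from can be reduced to one essential chore: escaping. The following core-Perl sketch (not from the book) shows the minimum you must get right when emitting XML by hand; XML::Writer and XML::Handler::YAWriter automate this, plus attribute quoting and nesting checks.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Generating XML with plain prints: the one thing you must get right
# is escaping the markup-significant characters in your data.
sub xml_escape {
    my $text = shift;
    $text =~ s/&/&amp;/g;   # must come first, or we escape our own escapes
    $text =~ s/</&lt;/g;
    $text =~ s/>/&gt;/g;
    return $text;
}

sub element {
    my ($tag, $text) = @_;
    return "<$tag>" . xml_escape($text) . "</$tag>";
}

print element(title => 'Perl & XML'), "\n";   # <title>Perl &amp; XML</title>
```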

Section four covers more advanced topics. Chapter eight is about XML transformations and filtering. It covers using XSLT to transform XML documents, with examples using the modules XML::LibXSLT, XML::Sablotron and XML::XPath.

Chapter nine goes into detail about Matt Sergeant's AxKit, the Apache XML Kit which allows you to create a website in XML and automatically deliver it to your visitors in the correct format.

Chapter ten rounds off the book with a look at using Perl to create web services. It looks at the two most common modules for creating web services in Perl - XML::RPC and SOAP::Lite.

Finally, section five contains the appendices which provide more background on the introductions to XML and Perl from chapter one.

There was one small point that I found a little annoying when reading the book: each example was accompanied by a sample of the XML documents to be processed, together with both a DTD and an XML Schema definition for the document. This seemed to me to be overkill. Did we really need both DTDs and XML Schemas for every example? I would have found it less distracting if one (or even both) of these had been moved to an appendix.

That small complaint aside, I found it a useful and interesting book. It will be very useful to Perl programmers (like myself) who will increasingly be expected to process (and provide) data in XML formats.


You can purchase XML and Perl from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

  • Though the reviewer didn't think so, I like it when DTD and XML Schema examples are side by side. Having looked at DTDs for quite some time now, I have to change gears to the new standard of using XML Schemas.

    Would be nice to have a book with more than just one chapter on web services. There are a plethora of Java/C# web services books out there, but it's hard to find one out there just for Perl, PHP, etc.
  • ... but I thought Perl was a write-only language? How can I be expected to read the book, if it's just gibberish like Perl? Geez. :) (Okay, fine - I admit it - I kinda like Perl. But that's another story.)
    • Perl excels at text processing in part because Perl excels at regular expressions. They are part of the language, instead of a tacked-on library interface. It is easier to extract arbitrarily formatted text using Perl, while other languages have a more difficult time, since the regular expressions don't come naturally in them.

      XML has a regular, structured format. It is easily parsed, but almost no one parses it directly. They use a model which represents the data, usually some form of DOM or SAX. Libraries are present in most languages. The need to rely heavily on regular expressions isn't there, and it allows people to choose other languages without paying a huge development penalty.

      Not that there isn't a development penalty, but the penalty is mostly same as developing under that language normally. Developing in C will generally take more time than in, say, Perl, Tcl, or Python, because of low-level issues that the other languages don't have. The resulting code, though, isn't necessarily uglier or different in structure.

      There are lots of pages on Perl and XML (check google if you don't believe me), but it just seems that Perl doesn't have the overwhelming advantage on other languages on this subject. That's not to say it isn't useful. But if I were to do XML processing, I probably wouldn't be using Perl.

      Unless it was to process nasty, arbitrarily formatted text into XML.

      If you really want your Perl script to be write only, use "chmod 0333 myScript.perl". Nifty language that is constantly coaxing you to the dark side, begging you to give in to your inner desires, to write code that will rip the sanity from those who look at it!
      • I haven't looked in to XML on Perl (although I had a friend write a regexp-based parser of a fairly large XML feed), but I did look passingly at XML in PHP ... it seemed like PHP had a fairly decent implementation. I plan to explore the PHP version in the future, but if the need exists, I'm always open to Perl. :)

        I've never even looked at Python code, but I hear it has a few ... oddities ... over other languages. I've never heard anyone say it sucks, though, so that's a plus. :)
  • by Anonymous Coward on Thursday January 30, 2003 @12:51PM (#5189685)
    The whole point of XML is that it is NOT just a string of text. That's why Perl isn't particularly any better than Java or C++ or VB or whatever for processing XML - you're going to be using a library that gives you SAX or DOM access to your XML, and you'll never need to know that there's a text representation being serialized onto some wires somewhere.
    • True and then not so. Perl's flexible data structures and OO make it a simpler approach than languages that think XML == Object Serialisation. It is also very likely that a lot of what you're going to see flying by in SAX or hanging around in DOM will be text. Sometimes lots of it, sometimes text that has non-XML structure and requires microparsing.

      But anyway, what really puts Perl ahead of the pack (together with Python, the only viable competitor I've tried -- Java is really lagging these days) is its large wealth of SAX (and to a lesser degree, DOM) tools. All sorts of very useful filters can be grabbed, complex pipeline management is a given, the SAX writing framework is cool, there are SAX parsers for many non-XML formats, etc.

    • by consumer ( 9588 ) on Thursday January 30, 2003 @01:31PM (#5189894)
      Let's see...

      • Editable in emacs (or vi). Check.
      • Grep-able. Check.
      • Diff-able. Check.
      • Understandable to the naked eye. Check.

      Sure smells like text to me.

      • by Anonymous Coward
        What you're looking at there is one possible representation of an XML document. What you can see is NOT XML. XML is an idea - a hierarchical data structure. If you're manipulating some XML programmatically, you should be manipulating this hierarchical data structure, and you'll be using some sort of API (SAX or DOM, probably) to do so. You should emphatically NOT be manipulating text strings. Any code of the form
        tag = tag + "</" + tagname + ">"
        means you're doing it wrong.

        So, no, XML is not editable in emacs (or vi), grep-able, diff-able or understandable to the naked eye. Go and think about it again.

        • by EvlG ( 24576 )
          I think it is interesting to note that this is precisely the reason that XML is poorly suited for any task that requires human intervention.
            • ... which of course kicks the chair out from under one of the primary arguments of the XML snakeoil salesmen, that XML is "human-readable", to say nothing of "human-editable".

            'jfb
              I'm not sure that "human-readable" is a primary argument for XML, but I don't think it matters much. ASCII codes aren't "human-readable" either; text editors convert them to characters we can read.

              You can't efficiently use a text editor to edit pictures, sounds or movies but this doesn't limit our ability to edit them using more appropriate tools.

              If I were going to edit or process XML, I would use the best tool for the job and if that's not a text editor, so what?
                • > I'm not sure that "human-readable" is a primary
                  > argument for XML ...

                Sure it is. It's the entire justification for having a text-based protocol -- otherwise, why waste the cycles?

                'jfb
                • >Sure it is. It's the entire justification for
                  >having a text-based protocol -- otherwise, why
                  >waste the cycles?

                  You don't use the text-based *representation* unless you are marshalling or unmarshalling the data. When you work with bound XML objects, you are using the document model as a container for methods to process the data, but not necessarily as a means to present the data in a text format.

                  You can use XML to represent data which is stored in a RDBMS. Naturally you can see that just because your query is presented as Document Nodes, and/or translated to a document marked up according to some DTD, the document or the object in memory is not the same thing as the data in the database.

                  XML in a text file is not "the data", unless that's where your application needs it. There are plenty of applications for XML where the data never sees ascii at all.

                  Read up on JAXB.

            • What is the meaning of your sig? I am curious, because I feel that the lack of two-phase commit is one of the major issues preventing the open-source rdmbs servers from competing fully with Oracle. I was curious if your sig was related to this issue.

              maru
        • you're doing it wrong.

          ...

          So, no, XML is not editable in emacs (or vi), grep-able, diff-able or understandable to the naked eye. Go and think about it again.

          yes it is.. just because you claim that "you're doing it wrong," doesn't mean it's impossible.

          xml is text just as much as html is.. are you going to tell me that html isn't editable in emacs or human-readable? how is html different from DocBook, for example?

        • What you're looking at there is one possible representation of an XML document.

          I couldn't agree less. In fact, XML is one possible representation of the abstract hierarchical data structure you described. Furthermore, XML is in fact a text representation. There are many other ways you could represent that data structure (e.g. a custom binary format, records in a relational or hierarchical database, an object serialised to a binary stream, etc.) but none of them are XML.

          The W3C themselves say that "XML is text [w3.org]" and then go on to point out that the advantages of being a text format include:

          • you can look at data without needing the program that produced it
          • you can read it with your favourite text editor
          • it's easier for developers to debug

          They also say: "Like HTML, XML files are text files that people shouldn't have to read, but may when the need arises".

          In parallel with the development of XML, our notion of the definition of 'text' has also moved forward. Through the adoption of standards like Unicode and bridging facilities like encoding declarations, we have moved past 7-bit ASCII as being the one true text.

          To claim that an XML file is not "editable in emacs (or vi), grep-able, diff-able or understandable to the naked eye" is demonstrably untrue. You'll obviously need a text editor that understands whichever encoding the file uses (both emacs and vim fit that bill) but a text editor is a perfectly serviceable tool for viewing and editing XML (obviously not the best tool in many cases, but acceptable nonetheless).

      • XML is text. XML is not just text.

        The point is that the document conforms to a certain structure: either rigidly (as when validating against a DTD or similar schema definition), loosely (as with well-formed XML, where elements must be closed correctly, but you can mix any elements and attributes you want), or something in between.

        It's not obvious at all that Perl is a natural match for processing XML. The things which Perl does so well - line-by-line file processing, string operations, regular expressions - are not very useful on XML. (For example you cannot match a balanced tree structure with a regular expression, so you can't use the standard string processing to do something so simple as extract an element and its contents.) Indeed they may lead you in a false direction at first. For quick throwaway tools, where the file is already pretty-printed in a certain way, Perl string operations may do the trick; for building applications that need to handle XML they are inadequate.

        To read and write XML you will need libraries, and that is the case in any language. Perl has a good selection including the standard-API-but-very-slow XML::DOM, the nonstandard-API-but-useful XML::Twig, and the I-used-to-use-it-but-IMHO-it-is-best-avoided XML::Simple. But using these libraries isn't particularly easier from Perl than from any other language.

        The ideal XML processing language would have a type system which could check at compile time whether the output you are generating will be valid for the DTD you have chosen; and it would also map the XML's DTD or schema onto the language's type system at input. For example, no need to get the list of child elements and get the first element from it, if the DTD specifies that there must be exactly one child.
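        The parenthetical about balanced structures can be seen in a two-line sketch: a regular expression cannot track nesting depth, so a non-greedy match stops at the first closing tag and splits a nested element in the wrong place.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A regex cannot count nesting: the non-greedy match stops at the
# FIRST </item>, slicing the outer element's content in half.
my $xml = '<item><item>inner</item>outer</item>';
my ($content) = $xml =~ m{<item>(.*?)</item>};
print "$content\n";   # "<item>inner" -- not the outer element's content
```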
    • Wow, are we arguing about what is text? Now that is an old school computing argument that I'm not sure the kids will appreciate! (no offense intended.)

      My $.02: XML is composed of text because it only allows ASCII characters. That's it. Well-formed XML "the language" requires more definitions, but an XML "file" is just another text file format. You're talking about a nondeterministic finite automata [unimi.it] quintuple that specifies how XML is parsed, understood, etc. But within that quintuple, I is the set of all ASCII characters >= 32 and < 128. At least I think that's true. Can someone post if I'm wrong? I appreciate learning of my misconceptions.

    • The whole point of XML is that it is NOT just a string of text. That's why Perl isn't particularly any better than Java or C++ or VB or whatever for processing XML - you're going to be using a library that gives you SAX or DOM access to your XML, and you'll never need to know that there's a text representation being serialized onto some wires somewhere.

      I'll respond to you though many others are making similar arguments. First of all, when you say "XML is NOT just text!" do you mean "XML is NOT merely text" or "XML is not solely text"? I'll agree with the first, but the second is generally not true.

      What no one seems to be mentioning is what you get out of those libraries: you get the entire structure in nodes thanks to the library's parser, but what are the contents of those nodes? Text! You might argue that the element names and most of the attributes are either defined by the dtd/schema, etc. but at least CDATA will often be arbitrary text. And, at least in my experience (mostly web-based applications), there will often be a need to process some of that text, e.g. extract links which are embedded in the text, convert newlines to <br>s, and many other things. And then, isn't it handy when the language reading the contents of those nodes has strong text-handling abilities?

      Just a thought.

      -chris
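      The kind of node-content munging chris describes looks roughly like this once a parser has handed you the text of a node; plain core-Perl regexes, no XML module involved (the sample text is invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Munging the text content of a node: pull out embedded links and
# convert newlines to <br> tags.
my $body = "First line\nSee http://example.com/ for details\nLast line";

my @links = $body =~ m{(https?://\S+)}g;   # extract embedded URLs
(my $html = $body) =~ s{\n}{<br>\n}g;      # newline -> <br>

print "links: @links\n";
print $html, "\n";
```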
    • The whole point of XML is that it is NOT just a string of text.

      Actually, the whole point of XML is that it is just a string of text.

      If XML parsers used a file format that wasn't human-readable text, there would be little point in using it, and we would all just stick with object-model databases.

  • Natural? (Score:1, Redundant)

    by CaseyB ( 1105 )
    As XML is just another text format, it follows that Perl will be just as good at processing XML documents.

    Not really. If you're using XML as "just another text format", then you're making a fundamental mistake. Within your software, you should always be treating XML as a hierarchical data structure, not as a text stream. Apart from manipulating CDATA or attribute value text, Perl has no particular strength with XML.

    • Re:Natural? (Score:3, Informative)

      by mortonda ( 5175 )
      Not really. If you're using XML as "just another text format", then you're making a fundamental mistake. Within your software, you should always be treating XML as a hierarchical data structure, not as a text stream. Apart from manipulating CDATA or attribute value text, Perl has no particular strength with XML.

      Indeed, the Perl-only XML libraries are quite slow. I believe most of the quality Perl XML handling is done by modules that use C libraries to do the grunt work. However, if the data in the XML itself is text data, then of course Perl and XML are a good match. Add SOAP and mod_perl into the mix, and you've got some very nifty tools.

  • Petal (Score:4, Informative)

    by Chris Croome ( 24340 ) on Thursday January 30, 2003 @01:00PM (#5189727) Journal

    One new, and cool, Perl XML module that people might not know about is Petal [cpan.org] (PErl Template Attribute Language).

    It is an implementation of the Zope TAL (Template Attribute Language) specification [zope.org] and it basically allows you to create XML templates where all the templating commands are just attributes of existing tags.

    This allows things like XHTML templates which are very WYSIWYG friendly since the editors don't do anything with attributes that they don't know about.

  • This was a review? (Score:4, Insightful)

    by Syris ( 129850 ) on Thursday January 30, 2003 @01:03PM (#5189749)
    I'm sorry, but this just wasn't a terribly deep review and well below par for /. Listing contents of a book and then nitpicking a detail don't a book review make.


    How effective were the examples? How easy to read and understand were the general concepts? Were the descriptions of libraries and API's clear? Was the writing generally readable?


    Would this book even make a good reference?


    Jeez, anyone want to follow up the post with a real review?

  • by Euphonious Coward ( 189818 ) on Thursday January 30, 2003 @01:07PM (#5189774)
    The whole point of XML is to free us from having to do the kinds of things Perl is meant for. Absent free-form text munging, Perl really has no advantage over other languages. At the same time, it has real deficits for people who need to know they have solved a problem correctly and completely.

    (For reference, see this rant [underlevel.net] by the brilliant net.kook Erik Naggum. The most quotable bit, for the lazy among you, is

    ...[Perl] rewards idiotic behavior in a way that no other language or tool has ever done, and on top of it, it punishes conscientiousness and quality craftsmanship -- put simply: you can commit any dirty hack in a few minutes in perl, but you can't write an elegant, maintainable program that becomes an asset to both you and your employer; you can make something work, but you can't really figure out its complete set of failure modes and conditions of failure. (how do you tell when a regexp has a false positive match?)
    )


    • The whole point of XML is to free us from having to do the kinds of things Perl is meant for. Absent free-form text munging, Perl really has no advantage over other languages. At the same time, it has real deficits for people who need to know they have solved a problem correctly and completely.

      I essentially agree with you but one still has the problem of merging a non-xml document into xml form. Here perl can be fairly useful.
      • Slugbait writes: "...one still has the problem of merging a non-xml document into xml form."

        That's true, but the Perl XML-handling modules are not much help for that.


          • Not much help? If you start counting the number of Perl modules that expose a SAX interface to non-XML data (not to mention the host of other super-useful SAX tools) you'll probably find only one equal, Python.




          And if you think that XML has freed us from additional text processing, you obviously haven't used XML much, or at least without much variety. Most people seem constantly bent on including microlanguages in attribute values or text content. Those need good text processing.


    • by Nexus7 ( 2919 ) on Thursday January 30, 2003 @01:27PM (#5189864)
      Well, perhaps not your soul, but your Perl code just reflects the way you think to a greater extent than other languages. This isn't something that's done underhandedly, it is well advertised in every posting in c.l.perl and the Camel book, and every other book about Perl. Which is that Perl is not at all orthogonal, TMTOWTDI (there's more than one way to do it). If you want to be rigorous and declare everything and not have your typos become references automatically, you "use strict" and your magic line is "#!/usr/bin/perl -w". If not, well Perl allows you to do that too. If you want objects, you can do that, if not, not.

      It is possible to write quality code in Perl. Just because the language allows you not to isn't its fault. It doesn't stop you from doing it, because that'd stop you from doing brilliant things.

      To address some specific things you mentioned, you can do full-fledged exception handling in Perl if you want to (with eval and specific modules), or, you know, not. And I'm not familiar with the false positive matches in regexps (perhaps you're referring to some famous problem). But if a regexp doesn't do what you want it to, isn't it wrong? Between // and tr and split I get along just fine.
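      The eval-based exception handling mentioned above can be sketched in a few lines of core Perl (the sub name is invented for illustration; CPAN modules such as Error.pm layer nicer syntax over this same mechanism):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# eval/die is Perl's built-in exception mechanism: die raises,
# eval BLOCK catches, and $@ carries the error message.
sub safe_sqrt {
    my $n = shift;
    die "cannot take sqrt of negative number\n" if $n < 0;
    return sqrt($n);
}

my $root = eval { safe_sqrt(-4) };
if ($@) {
    print "caught: $@";
}
else {
    print "root = $root\n";
}
```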
    • by glwtta ( 532858 ) on Thursday January 30, 2003 @01:45PM (#5189992) Homepage
      how do you tell when a regexp has a false positive match?

      A what? You (or rather the brilliant person being quoted) either mean that it matches a string that the expression isn't supposed to, which would be a serious bug in the language (and I am not aware of any such bugs); or you mean that it matches correctly, but matches things you didn't expect it to, in which case you tell, by (gasp!) testing your code. In any case, how do you tell a "false positive" regexp match in Java?

      but you can't write an elegant, maintainable program that becomes an asset to both you and your employer

      Perhaps you can't. I have, and I do.

    • by Anonymous Coward

      Okay, I'll bite.

      The whole point of XML is to
      free us from having to do the kinds of things Perl is meant for.

      So how does XML do that in, let's say, system administration?

      Absent free-form text munging, Perl really has no advantage over other languages.

      So ehmm... what type of things is XML made out of? Elements' names, contents, etc, it's all text.

      You can commit any dirty hack in a few minutes in perl, but you can't write an elegant, maintainable program that becomes an asset to both you and your employer

      You can write a dirty hack in any language. And about the last part: what about CPAN [cpan.org]?

      (How do you tell when a regexp has a false positive match?)

      That would be by understanding the regex, just as any other chunk of code. (Funny, that... When you want to say something bad about Perl, moan about its horrible, illegible, etc regexes. When you want to mention something positive about another language -- especially when comparing to Perl -- mention support for powerful, fast, etc regexes. And advertised as "Perl-compatible" at that.)

      -- Arien

      • Arien asked, "how does XML [free us from doing the kinds of things Perl is meant for] in, let's say, system administration"

        When config files are in XML, they can be munged programmatically without regexp hackery.

        He goes on, "... what type of things is XML made out of? Elements' names, contents, etc, it's all text."

        It's not free-form text, it's structured text. Somebody else pointed out, though, that there is a distressingly large amount of free-form text to be parsed in attribute strings, body text, and (!) comments, that XML structure extraction tools don't help with.

        (I won't answer criticism of Naggum's rant; he's not known as a net.kook for nothing. Take it up with him.)

    • Maybe the author was unable to write anything but hacks, and couldn't make anything elegant or maintainable. I've written programs with multiple subsystems, and put them well into maintenance without a lick of trouble, all in perl.

      Yes, $dd->updsp( 1,3, @ad ) looks worse than $Driver->update_displays( $Display::LOBBY, $Display::CUSTSERV, @additional ), and boy it's just a shame that perl doesn't let me use meaningful identifiers or document APIs or forward declare functions for arg checking ahead of time. Oh wait... Really. The argument is dead, continuing to raise it is just trolling.

      I switched to python because I got tired of leaning on my shift key. Tcl has probably the prettiest syntax for me, but as a language it's braindead beyond belief (not to mention slow).
    • Absent free-form text munging, Perl really has no advantage over other languages. At the same time, it has real deficits for people who need to know they have solved a problem correctly and completely.

      Absolutely. Once you get beyond text parsing by standardizing the syntax, the goal of a program is to manipulate objects. XML maps very well into object trees and that is why it is commonly processed using Java and Python. If you want the powerful capabilities of a dynamically typed language, with a simple, easy to learn grammar, then you should use Python for processing XML, not Perl. (Perl's object syntax is as obtuse as the rest of the language and offers no advantages over the elegant object model of Python. In fact, Larry Wall borrowed much of the Perl object design from Python. Use the genuine original, not the imitation.) The standard Python library includes a fine package for navigating through XML data and zero text processing code needs to be written to do this. It's objects all the way down.

      There is a good article [xml.com] that explains how to use Python generators to process XML content. This is something you will never be able to do as easily in either Java or Perl.

    • How do you tell when anything has a false positive match...TESTING
  • Although I agree that Perl/XML sounds like a powerful and flexible way to serve dynamic content, I can't help thinking that it is ultimately better to adapt existing frameworks (Slashcode, PHP-Nuke & friends etc..). Maybe a friendly group of Perl/XML gods will read the book and produce a framework/toolkit that the rest of us mere mortals can use. I suspect that I will buy this book anyway, read it, and after frying my brain for a few days I will stuff it on my bookshelf and walk away with a huge inferiority complex. My bookshelf makes me look like a guru, but secretly, my encyclopaedic knowledge comes from here [bathroomreader.com].
  • and i know there are going to be a lot of posts saying "XML obviates Perl!"...

    but i disagree. Perl absolutely RIPS through this stuff, unlike the Java stuff i've written. sometimes, there's nothing like some good, old-fashioned procedural code to munge one document into another.

    the only problem i had was with UTF-8 stuff. perl really wasn't quite there until perl 5.8, and i'm having trouble finding installs of it on the machines i need to use it on at the university i work for.
    • Perl does not rip through text files. Programmers can rip through perl code, but perl is SLOW. I once rewrote a Perl parser in Java and went from 9hrs to 45mins.
      • Re:i hate perl... (Score:2, Insightful)

        by etcshadow ( 579275 )
        "I once rewrote a Perl parser in Java and went from 9hrs to 45mins"

        Well, shit. I once rewrote a Perl parser in *Perl* and went from 9hrs to 45mins. What the hell kind of flame-bait shit is this!?

        It is true that extremely well-written C code can outperform perl code at anything. It is also true that for things that perl is made for (like ripping through tons of text-data), a typical Perl program will *most likely* do it better than a typical C program, simply because it is making use of more optimized underlying algorithms (even though the actual execution structure is slightly more bloated than C... double-dereferencing pointers, compile-time immediately before run-time, etc). ... However, Java is just as goddamn interpreted as Perl, if not more so! Perl compiles to *native* byte-code prior to execution, unless you are talking about eval'd strings, whereas Java sits in non-native byte-code that has to be interpreted real-time by the VM. Best case: you have a good just-in-time compiler that pulls Java up to even with Perl (that is, compiled immediately prior to run-time into native byte-code).

        Also, Java has all the same disadvantages with respect to C... that is, more insulation from the *actual* memory (no such thing as a real pointer in either, garbage-collection, etc).

        Anyway, bottom-line is this. If what you say is at all true, then you had a shittily-written Perl program. I promise you that I can write just as shitty a program in Java... does that mean that we should trash Java?!?!? Abso-f*cking-lutely not! I'll do you one better, too: I'll write just as shitty and slow of a parser in Java that doesn't even *look* that bad to someone who doesn't understand the subtleties behind such simple abstractions as strings, lists and arrays.

        I'm very serious with what I said originally; I have, in fact, taken a Perl parser (a super-light-weight XML parser, actually) and reduced the parse-time by several orders of magnitude. The idiot who wrote it originally (myself) went walking through the string or stream looking for '<'s (with a regexp), at the highest level. It is *terribly* slow to strip leading characters off of a long string in Perl (I'm pretty sure that it copies the whole goddamn string, minus those 10 (or however many) characters on the front). I made a *very* simple change, namely this:

        # split on positive lookahead assertion of a '<'
        # then we just deal individually with blocks of text that all start
        # with a '<'... should save time
        my @xml = split(/(?=<)/,' '.$xml);
        shift @xml;

        And, you'll note that I f*cking commented it (something which people just don't seem to understand when they trash perl). Bang! Many orders of magnitude in speed improvement. Simple.

        Anyway, pull your head out of your ass.
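For readers skimming past the flames: the split-on-lookahead trick above is worth stealing, because every resulting chunk starts at a tag boundary and can be handled with a cheap anchored match. A minimal, self-contained sketch (the tag-name extraction here is illustrative, not a real parser):

```perl
use strict;
use warnings;

# Split so every chunk begins with '<'; the dummy leading space
# guarantees a throwaway first element even when $xml starts with '<'.
my $xml = '<a>one</a><b>two</b>';
my @chunks = split(/(?=<)/, ' ' . $xml);
shift @chunks;    # discard the text before the first '<'

# Each chunk is now "<tag...>optional text", so a cheap anchored
# match is enough to pull out the opening element names.
my @names;
for my $chunk (@chunks) {
    push @names, $1 if $chunk =~ /^<(\w+)/;
}
# @names is now ('a', 'b')
```

The win comes from never shortening the original string: split scans it once instead of repeatedly copying an ever-shrinking remainder.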
        • The above includes several places that *should* have had a less-than character. You'd think that posting "Plain Old Text" would properly escape them as &lt;, but I guess you'd be wrong.

          Oh, well. You know what I meant.
          • Hey, what can you expect? Slashcode is written in perl.

            (it's funny, laugh.)
            • It is funny. On the other hand, that's such a more suitable task for perl than dealing with xml:

              if ($form{Format} eq 'Plain Old Text') {
              $form{Comment} =~ s/&/&amp;/g;
              $form{Comment} =~ s/</&lt;/g;
              $form{Comment} =~ s/>/&gt;/g;
              }

              bing-bang-boom. How hard was that?
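For anyone copy-pasting the idea: the order matters. The & substitution has to run first, otherwise the ampersands in the freshly produced &lt; and &gt; entities would themselves get double-escaped. A stand-alone version:

```perl
use strict;
use warnings;

# Escape the three XML-significant characters. Ampersand goes first,
# otherwise the '&' inside a just-produced '&lt;' would be re-escaped.
sub escape_text {
    my ($s) = @_;
    $s =~ s/&/&amp;/g;
    $s =~ s/</&lt;/g;
    $s =~ s/>/&gt;/g;
    return $s;
}

# escape_text('a < b & c') yields 'a &lt; b &amp; c'
```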
              • Would be more suitable, except for the fact that post mode names are misleading.

                Plain old text is not plain text. FAQ claims the following:

                HTML Formatted: You determine the formatting, using allowed HTML tags and entities.

                Plain Old Text: Same as "HTML Formatted", except that <br> is automatically inserted for newlines, and other whitespace is converted to non-breaking spaces in a more-or-less intelligent way.

                Extrans: Same as "Plain Old Text", except that & and < are converted to entities (no HTML markup allowed).

                Code: Same as "Extrans", but a monospace font is used, and a best attempt is made at performing proper indentation.

                So it seems that "Extrans" (whatever that is supposed to mean) would have done the job...

        • You are right of course. Perl is as fast as Java and parsing XML is most assuredly the domain of a scripting language.

          ha ha ha

  • I think one of the main reasons Perl and XML aren't generally used together is that Perl isn't object-oriented in the same way that Java and C# are. I know that OO concepts have been bolted on to Perl in the same way that OO was bolted on to C++, and in my opinion with similar results (i.e., kludge-fest). It's very natural in Java to parse an XML doc and get an object, while it's more natural to parse a log file or CSV file with Perl.
    • That's because you see XML more or less as an object serialisation syntax when it has been proven over and over again that there's serious impedance mismatch between those two views (at least, with Java's rather limited view of OO). See XML Schema if you don't think so.

      Don't forget that the Desperate Perl Hacker was in the requirements for XML. And they succeeded pretty well in making XML match Perl.

    • Actually, I use the SAX interface with Perl. That means writing an OO event handler, the same way the Java folks do.

      http://sax.perl.org/

      It's just as natural to do OO w/Perl as it is doing text parsing. Perl just doesn't force you to do it one way.
  • Then maybe you should get it from Amazon [amazon.com], where it is $12 cheaper.

    Please Rob, explain to us how whatever deal you have with bn.com is worth your user base overpaying by so much? Users can buy the book through the link above, and I will put a third of my affiliate commission (about $1.40 per copy) towards Perl development projects [affero.net]. This way everybody wins. Using your link, I assume you win, and that bn wins, but your loyal user base is out an additional $12 and I can't imagine your deal with bn.com nets you that much for providing the link.

    • by graxrmelg ( 71438 ) on Thursday January 30, 2003 @01:52PM (#5190033)
      But if people are interested in getting a good price rather than putting a commission into your pocket (and contributing to a company that abuses software patents), maybe they should order it from Bookpool [bookpool.com] instead, for $3 less than Amazon. (I don't have any affiliation with Bookpool.)
      • But if people are interested in getting a good price rather than putting a commission into your pocket (and contributing to a company that abuses software patents),

        They can't really abuse the patent, they can only take advantage of it. If you want to boycott anyone over their one-click patent, boycott the US government that issued them the patent. If you think the patent was issued in error, then provide the prior art to discredit the patent. ...they should order it from Bookpool [bookpool.com] instead, for $3 less than Amazon.

        Except that unless you buy another book to get over BookPool's $40 Free Shipping threshold you will just be paying that $3 to UPS instead of me and the Perl Development Fund. Amazon's Free Shipping [amazon.com] threshold of $25 falls conveniently just under their price for the book [amazon.com].

    • Actually, /. used to link to Amazon, and had an affiliate program. Once Amazon started enforcing their one-click patent, and the Amazon boycott began, /. switched to Fatbrain (which was bought by BN).
  • by mattdm ( 1931 ) on Thursday January 30, 2003 @01:45PM (#5189994) Homepage
    I see the table of contents explained in paragraph form. And then one complaint about the organization of the book. And then I expect to read the review, but it's already on to "you can buy this book here", and user comments.

    I know complaining about slashdot stories is like shooting those proverbial barreled fish, but sheesh.
  • XML::Simple (Score:2, Interesting)

    by Anonymous Coward
    I'm seeing a lot of comments that perl doesn't have any particular strengths when dealing with XML. A good module people should check out is XML::Simple. Basically, it automagically turns XML into a nested data structure, and automagically turns a nested data structure into XML. The great thing about it is that you just make a single API call, and then directly access the data from there without having to learn anything more complicated. Definitely not an end-all solution, but it definitely handles the common case wonderfully, and has quite a few handy options to allow more fine-tuned control.
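To make that point concrete, here is roughly what the single-call round trip looks like (assuming XML::Simple is installed from CPAN; the config document and field names below are made up for illustration):

```perl
use strict;
use warnings;
use XML::Simple;    # exports XMLin and XMLout by default

# A hypothetical config document; XMLin collapses it straight
# into a plain nested data structure.
my $doc = '<config><host>www.example.com</host><port>8080</port></config>';
my $ref = XMLin($doc);
# $ref is now { host => 'www.example.com', port => '8080' }

# And back the other way: the structure becomes XML again.
my $xml = XMLout($ref, RootName => 'config');
```

From here it's ordinary hash access, which is exactly the "nothing more to learn" appeal the parent describes.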
    • I think you're better off using XML::Writer and XML::SAX.

      Writing a SAX handler is invaluable for parsing huge XML files. There's only so much you can fit in memory.
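The streaming idea is easy to sketch without any CPAN dependencies. This toy is not a real XML parser (production code would subclass XML::SAX::Base and get a parser from XML::SAX::ParserFactory); it just shows the shape of the pattern: the handler keeps O(1) state while events fire, so document size never matters:

```perl
use strict;
use warnings;

# A toy event handler in the SAX style: it accumulates a count
# instead of building a tree, so memory use stays constant.
package CountHandler;
sub new           { bless { elements => 0 }, shift }
sub start_element { $_[0]{elements}++ }

package main;

# A deliberately crude event "parser": fires start_element for
# every opening tag as it streams through the string.
sub stream_parse {
    my ($xml, $handler) = @_;
    while ($xml =~ m{<(/?)(\w+)[^>]*>}g) {
        $handler->start_element($2) unless $1;    # skip closing tags
    }
}

my $h = CountHandler->new;
stream_parse('<log><entry/><entry/></log>', $h);
# $h->{elements} is now 3 (log plus the two entries)
```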
  • XML is NOT just a text file (just because we can read it with a simple "more hello.xml"). Perl is good at processing text because it knows regular expressions and some extensions to them. However, an XML DTD (or a Schema) defines a context-free grammar, which puts it in a language class above the regular languages. That's why we can't fully parse XML files with Perl's REs. A good example would be nested tags that result from recursive grammar rules in the DTD. These cannot be parsed without some serious geekism in Perl REs. However, I love to write those little tools that operate on XML data in Perl. Very often, you can work with regular expressions on context-free/sensitive language data!
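That said, since 5.10 Perl's patterns are themselves more than regular: a regex can recurse into a named group, which is exactly the "serious geekism" nested tags call for. A toy demonstration on a single made-up <b> element (not a general XML parser):

```perl
use strict;
use warnings;

# (?&elem) recurses into the named group, so the pattern matches
# arbitrarily deep, properly balanced <b>...</b> nesting --
# something a classic regular expression cannot do.
my $balanced = qr{ (?<elem> <b> (?: [^<]+ | (?&elem) )* </b> ) }x;

my $nested = '<b>outer <b>inner</b> tail</b>';
my $broken = '<b>outer <b>inner</b>';

my $ok_nested = ($nested =~ /\A$balanced\z/) ? 1 : 0;   # balanced: matches
my $ok_broken = ($broken =~ /\A$balanced\z/) ? 1 : 0;   # unclosed: fails
```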
  • by HealYourChurchWebSit ( 615198 ) on Thursday January 30, 2003 @03:20PM (#5190505) Homepage


    The reviewer is correct, Perl is a good tool for slamming and jammin' text, including XML. What I'm not so sure of is the quote "It's therefore surprising that using Perl for XML processing hasn't received much attention until recently."

    I mean, one need only scroll down the extensive list of CPAN Modules [cpan.org] to see well over 50, as well as many sites/authors devoting [cpan.org] time, energy and resources.

    Similarly, I would point out some of the press around modules supporting web services via XML, such as SOAP::Lite as far back as 02/26/01 [netscape.com] and XML-RPC also in '01 [sourceforge.net] -- or O'Reilly's own XML.com with articles such as "Processing XML with Perl [xml.com]" written shortly after the turn of the millennium.

    Point is, though I personally love Perl, blatant plugs such as "... it's just that the world outside of the Perl community doesn't seem to have taken much notice of this work. This is all set to change with the publication of this book and O'Reilly's Perl and XML." don't inspire confidence in the reviewer's objectivity.

    • Funny you should mention the reviewer's objectivity.

      He wrote Manning's "Data Munging with Perl". Which is all about "slamming and jammin' text, including XML". In fact, it's got a reasonably large section on XML.

      But he doesn't mention it, and he doesn't push it. Instead he recommends two books from different publishers, neither of which he works for. As far as I know he doesn't provide any content for O'Reilly's oreillynet, but I may be wrong there.

      People who use Perl, and are part of the Perl community, know that you can slice and dice XML with Perl. What Dave is trying to say is that the managers and Java/Python/whatever programmers aren't so aware of this.
  • check it out. http://axkit.org/ [axkit.org]

    "Apache AxKit is an XML Application Server for Apache. It provides on-the-fly conversion from XML to any format, such as HTML, WAP or text using either W3C standard techniques, or flexible custom code. AxKit also uses a built-in Perl interpreter to provide some amazingly powerful techniques for XML transformation."

    Picture Cocoon for Perl: using Perl for XSP pages and doing pipeline transformations on XML. Great stuff.
  • Use AxKit! You're selling yourself short if you start to develop a site without it. It's just the ideal way to get the whole separation of content and presentation thing that XML is supposed to be all about. It makes it dirt easy to store your content in XML, use XSLT for transformations and XSP for dynamic back-end processing. Check it out [axkit.org]!

    Also read this [monasticxml.org]

    simon
  • Perl's strength in text processing was its ability to work with (read and generate) poorly structured data. XML makes it easy to create well-structured data, so writing document-processing code in languages like C++ is easier. People who don't know Perl, or people who learned other XML toolkits first, have less reason to learn XML with Perl.
    • That might be true in a perfect world where everything is XML. In the world in which I live, I have to transform some goofy mainframe-generated files ("I don't care how wide you make field 154, just tell me how wide it is and I will process the file.") into XML. That makes Perl very important.
  • That Perl was geared toward text processing has been an obstacle to XML support, in my admittedly limited experience. We're trying to interface with a 3rd-party system that claims to use XML for data interchange. But because their programmers are used to traditional text processing, their XML support is _very_ kludgy. Stupid things like requiring line feeds after each element, etc.
  • Actually, Perl is mediocre at processing XML/HTML/SGML. Ever write a lex-type state machine parser in Perl? You can do it, but it's not as easy as it should be. "Get next character from string" is slow and/or clunky in Perl. (If strings are long, removing the first character is expensive. And you can't just subscript your way through a string. So you need to manage a small working buffer explicitly, something you shouldn't have to do in a language like Perl.) Perl does tree structures of objects, but Perl 5 objects aren't all that fast. Parsers in Perl tend to either have C components (creating a portability problem) or are slow. This is a lack.

    You can write such parsers as regular expressions, but that makes them even slower.

    Despite this, I parse millions of lines of SGML/HTML/XML into trees of HTML::Element, using only Perl. But it's clunkier than it should be.
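For what it's worth, the usual pure-Perl workaround for the "no cheap next-character" complaint is to leave the string alone and advance a position through it with \G and /gc, so nothing is ever copied or truncated and no working buffer needs managing. A rough lexer loop in that style (the token types here are made up for the sketch):

```perl
use strict;
use warnings;

# Tokenize in place: each /\G.../gc match advances pos($text)
# without ever copying or shortening the string, so long inputs
# stay cheap -- no explicit working buffer required.
my $text = '<a href="x">hi</a>';
my @tokens;
while ((pos($text) // 0) < length $text) {
    if    ($text =~ /\G(<[^>]*>)/gc) { push @tokens, [ tag  => $1 ] }
    elsif ($text =~ /\G([^<]+)/gc)   { push @tokens, [ text => $1 ] }
    else                             { last }    # stuck: bail out
}
# @tokens: ([tag => '<a href="x">'], [text => 'hi'], [tag => '</a>'])
```

The /c modifier keeps pos() where it was after a failed alternative, which is what lets the branches be tried in order without losing the scanner's place.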
