179 comments

Bill Frist (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#15254013)

Has a first post.

Sabatoge? (1)

penguinoid (724646) | more than 7 years ago | (#15254024)

Wasn't this the one that Microsoft was going to sabatoge? What happened to that?

Re:Sabatoge? (0)

Anonymous Coward | more than 7 years ago | (#15254292)

Real religious types follow their fairy tales TO THE LETTER.

Re:Sabatoge? (1)

CarpetShark (865376) | more than 7 years ago | (#15254347)

Maybe they figured that questioning an honest man's integrity, costing him his job, and risking his mental wellbeing when he's not used to media pressure was enough for a little while. Don't worry, Microsoft will be back to playing hardball on this soon though.

Re:Sabatoge? (1)

iabervon (1971) | more than 7 years ago | (#15254362)

They then said they weren't going to sabotage it, and that they were only on that committee because they were going to push for acceptance of their format. A number of ODF-related organizations had representatives on the committee, and it's common for interested organizations to be members, because they can actually explain the reasoning behind the specification.

Of course, it's hard to say whether the MS rep would have caused problems if the potential hadn't been pointed out and MS hadn't been forced to promise good behavior.

For that matter, the ODF reps will presumably sabotage MS's submission, just like they accused MS of trying to do. But, of course, they're supposed to stop the group from endorsing standards that are worse than other standards.

word of the day (0)

Anonymous Coward | more than 7 years ago | (#15255114)

Is "sabatoge" something Microsoft-specific or did you mean sabotage?

Hopefully not... (4, Insightful)

albalbo (33890) | more than 7 years ago | (#15254027)

Although ODF is a somewhat nicer standard from a human point of view, and builds on existing standards, I hope OpenXML isn't accepted, simply because having two standards doing the exact same thing is nonsense. They're much more similar than they are different at many levels.

ECMA are welcome to OpenXML; I don't think ISO should accept it.

Re:Hopefully not... (3, Interesting)

DrXym (126579) | more than 7 years ago | (#15254368)

Nicer from a human point of view means fewer bugs down the line. I just spent a week trying to get a .wsdl to parse through both Axis AND .NET's wsdl.exe. Any format that is less opaque, less verbose and more understandable gets my vote.

"human point of view"... (1)

CarpetShark (865376) | more than 7 years ago | (#15254386)

It's not just nicer from a "human point of view"; it's simply more appropriate, technically, for the data it contains. Microsoft's half-hearted attempt at an XML format is like storing cars on a keyring with more keyrings attached: you can put all the parts on there, if you try hard enough, but it just makes no sense, for man OR machine.

Same thing? (2, Interesting)

Anonymous Coward | more than 7 years ago | (#15254499)

So did the ODF folks finally decide how to store formulas? Currently every single spreadsheet that supports ODF (not that there are many) stores them however it wishes, with no defined standard.

Re:Same thing? (0)

Anonymous Coward | more than 7 years ago | (#15254537)

MathML, wasn't it?

Blah-blah buzzword blah-blah-blah (0)

Anonymous Coward | more than 7 years ago | (#15254680)

MathML, wasn't it?
Except no ODF spreadsheet uses it. Next, please.

mathML sucks. (2, Informative)

zippthorne (748122) | more than 7 years ago | (#15254751)

MathML is the worst way to store formulas ever. Anything that takes 5k of text to specify int(from 0 to infinity, exp(-lambda*x**2)dx) correctly is simply stupid. It means hand-coding MathML just isn't a viable option for more than a couple of very simple equations. We should agree on something similar to C, Fortran, Matlab, or other programming-language notation as the standard way to store equations in the file. The added benefit of potentially being able to actually execute at least some of the functions is just icing on the cake.

On a related, but somewhat less relevant note: I can't find any inexpensive programs that allow the generation of MathML easily. There are a few out there that generate MathML at all, but they seem to concentrate on the typesetting aspect of MathML* and on having an obtuse interface. Why isn't there an easy-to-find, cheap or free (beer or speech) MathML editor that is as easy to use as the equation interface in LyX? (And yes, I've tried the export-html options in LyX, and attempted to manually convert with command-line utilities, but my latex2html functions all seem to be completely braindead.)

*iirc, there is a way to use mathML to store calculable functions, but I have yet to see this implemented, and it takes even MORE text to store the equations.

I think the lack of available editors, and tex converters, especially considering the potential academic utility of mathML is pretty good evidence that it is a poor standard: it hasn't generated enough interest for someone to scratch the itch and write a decent converter/generator/editor.
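For comparison, here is roughly what that integral looks like in MathML content markup (the "calculable" flavour the footnote refers to). This is hand-written from the MathML 2 content element set and untested, so treat it as a sketch of the shape rather than validated output:

    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <apply><int/>
        <bvar><ci>x</ci></bvar>
        <lowlimit><cn>0</cn></lowlimit>
        <uplimit><infinity/></uplimit>
        <apply><exp/>
          <apply><minus/>
            <apply><times/>
              <ci>lambda</ci>
              <apply><power/><ci>x</ci><cn>2</cn></apply>
            </apply>
          </apply>
        </apply>
      </apply>
    </math>

Even well short of the 5k figure above, the contrast with the one-line int(from 0 to infinity, exp(-lambda*x**2)dx) notation is the parent's whole point.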

Re:Hopefully not... (3, Insightful)

Kjella (173770) | more than 7 years ago | (#15254567)

Well, the day Microsoft accepts ODF as their standard is the day pigs are flying in a snowstorm through hell. From what I can tell, it seems anyone looking for a standard is looking at ODF, not the "Microsoft Office 2007" standard. The MS shops will continue to run MS-only whether it's binary or XML, standard or not. If they want to open it up and call it OpenXML so we can get proper documentation to migrate away from it, I really don't think that's going to hurt ODF. At any rate, if the two really do the same thing, one would think excellent ODF/OpenXML converters could be made to make this a non-issue. Same way I really don't care if an image has gone from BMP to PNG to TIFF and back again.

Re:Hopefully not... (1)

g2devi (898503) | more than 7 years ago | (#15254797)

Maybe, but don't forget that Microsoft is in an anti-trust investigation in Europe. Suppose the EU decided that the only way MS Word would be available is if MS Word supported the ODF file format and saved files *by default* in ODF format. That would pretty much level the word processing playing field and allow Microsoft to "innovate" freely, since the only reason for choosing Microsoft then would be that you like their software.

Re:Hopefully not... (2, Insightful)

moochfish (822730) | more than 7 years ago | (#15255226)

I thought the point of standardizing something is to keep there from being 100 "official" ways to do it. What's the point of having fifteen approved "standard" document formats? I'd say this getting approved is the nail in the coffin for Microsoft's precious standard. There can be only one standard, and ODF is now it.

Comparison (2, Interesting)

2.7182 (819680) | more than 7 years ago | (#15254047)

If you look at the history of standards, such as the work done at NIST, usually people try to choose the best thing, but it is hard to foresee what the best is. A good example is the set of standards for quantifying vibrations in static structures, such as bridges. Looked good in 1948, turned out bad (Tacoma bridge).

Re:Comparison (5, Insightful)

AKAImBatman (238306) | more than 7 years ago | (#15254108)

[What] looked good in 1948, turned out bad (Tacoma bridge).

There's a huge difference between construction engineering and software engineering. In construction engineering, poorly understood physics and unforeseen weather patterns can create unpredictable situations and stresses. In software engineering, the rules of the system are predefined and well understood. While a lot of research goes into ways of doing specific tasks "better", the tradeoffs to each design are usually well understood.

The result is that standardized computer algorithms and formats are rarely incorrect. However, they do become obsolete in relatively short periods of time due to increases in computing power and informational storage/transmission requirements.

I think software is less well understood (1)

plopez (54068) | more than 7 years ago | (#15254316)

Specifically, based on this sentence: "In software engineering, the rules of the system are predefined and well understood."

1) Many projects are hacked together without good requirements. There are even methodologies which minimize requirements; see RAD or Agile Development. In some cases the users are unsure of the requirements, and the requirements are discovered by accident, if at all.

2) Most requirements have a strong temporal association, meaning what is a requirement in May may not be a requirement in August due to changes in the economy, the technology being used, the regulatory environment (state, national, local or international regulations, treaties or standards), or whether a company is purchased or merges with another company. So if a software project lasts 18 months, I will bet the requirements will have shifted in that time span.

3) Software cannot be touched, tasted, felt, smelt or seen. Just the results, such as a report, a graphic or a GUI. This makes it *much* harder to understand, especially by the majority of the population who do not have either the training or the aptitude for this type of abstract reasoning. And we rely in large part on these folks to help with requirements!

Your statement in the last sentence sort of overlaps point 2 of this comment, BTW.

Just my $.02

Re:Comparison (2, Interesting)

ThePhilips (752041) | more than 7 years ago | (#15254449)

The result is that standardized computer algorithms and formats are rarely incorrect. However, they do become obsolete in relatively short periods of time due to increases in computing power and informational storage/transmission requirements.
In engineering, new building blocks are developed maybe once per decade. How old is the concept of building houses out of brick? Out of wood? You can't do much about physics, which describes the laws of the world we see around us.

In software engineering, the world gets reinvented more or less every decade. Every new generation of computers allows newer, improved algorithms and pulls new application fields in, and every time people find that the algorithms can be improved even more. Two hundred years ago, simple automation of money counting was unimaginable. Consider what has happened in those two centuries and how the process has evolved: today an amount of money has *no* physical equivalent; it's just a number our bank stores along with the rest of the account information. Numbers can evolve even though they exist only in our imagination. You can hardly expect a brick or a lump of iron to evolve in any similar way.

Standards, if they want to remain useful, have to evolve. IMHO a standard has to include a way to add improvements and a way to move those improvements under the standard's umbrella. E.g. HTML is tag-based: there is a definition of each tag along with its properties. An improvement to HTML can be made in two ways: a new property on an existing tag, or a completely new tag. With the next revision, the schemas can be updated to include the improvement. That's how HTML evolved from version 1.0 to 4.0 to XHTML 1.0. Make HTML an international standard requiring strict compliance and a six-month approval period for every new feature, and you would find that HTML would never have evolved as far as it did under the rule of the W3C.

ODF inherited from XML an easy way to add improvements. If the ISO workgroup isn't made up of complete [CENSORED] - and luckily for us it isn't - standardization will not stand in the way of improvements.

What remains for ODF to be a healthy standard is competing implementations. KOffice is limited to KDE, which doesn't run under Windows. Working with OOo every day, I wish it had never been ported to Windows in the first place. I hope Corel will deliver on its promise and add to the competition. Having only OOo as an option under Windows at the moment hardly helps ODF adoption.

Re:Comparison (1)

Maxo-Texas (864189) | more than 7 years ago | (#15254468)

I think microsoft office integration is a perfect counter-example to your point.

Integration was wonderful until your computer was networked; then the "storm" of pressure from outside forces caused many unforeseen failures.

There are others. If software is -only- used in a static manner then your point holds. But in the real world, only dead software is used that way. Living software is constantly reused in unforeseen ways due to changing circumstances, legalities, competition, etc.

Re:Comparison (3, Insightful)

guet (525509) | more than 7 years ago | (#15254586)

In software engineering, the rules of the system are predefined and well understood.

Until you give it to the users, or ask it to interact with another program; then it's a different story. The actions of users/other programs are often poorly understood and unforeseen, and I'd argue they are analogous to the weather in this situation - they introduce inputs that the programmer would dismiss as impossible or garbage, and promptly crash that 'perfect' program. I'd agree there is a huge difference between construction and software engineering, but which profession is more rigorous?

The result is that standardized computer algorithms and formats are rarely incorrect.

Algorithms and formats are often incorrect when they actually come to be used because of a misunderstood or misstated problem. Look at the language used to present these pages - HTML, hardly an elegant format. I suppose you could call it correct for some very sloppy values of correct, but really, given the purpose it's being used for (presentation of complex styled text) it is woefully inadequate, and also overengineered in some ways. This problem is inherent in any complex system used by many people, things simply can't be 'correct' for all uses, and often they're not even close. I wonder if that's why the phrase 'Broken as designed' originated in computer programming?

Lastly, formats usually become obsolete because companies want you to buy their new program, not for technical reasons (see Photoshop, Illustrator, Word etc etc). You're trying to factor the human out of programming, and thus ignoring all that is good and bad about it.

Re:Comparison (1)

wrygrin (128912) | more than 7 years ago | (#15254665)

> understood physics and unforeseen weather patterns can
> create unpredictable situations and stresses. In software
> engineering, the rules of the system are predefined and
> well understood. While a lot of research goes into ways
> of doing specific tasks "better", the tradeoffs to each
> design are usually well understood.

that misses an important aspect of software development. many of the hard issues are not about fundamental algorithmic principles, but about supporting development at the frontiers of what's being developed.

software is an intrinsically compositional art, so that the frontiers and even the criteria for standards are constantly (and sometimes erratically) shifting. *that* is where many of the challenges in software standards lie - each choice you make precludes some avenues, and some of the time those other avenues are going to prevail, despite your expectations. the software standards landscape is littered with failed standards, just because things turned in different directions than anticipated, for many different reasons.

yet, waiting to see how things shake out before declaring a standard can lead to fragmentation and missed opportunity. the art, as i understand it, is in choosing the place and time to take a stand, and gathering salient input so that you make choices that will support the best opportunities - for intricate notions of "best"...

Re:Comparison (2, Insightful)

ediron2 (246908) | more than 7 years ago | (#15254709)

[What] looked good in 1948, turned out bad (Tacoma bridge).

There's a huge difference between construction engineering and software engineering. In construction engineering, poorly understood physics and unforeseen weather patterns can create unpredictable situations and stresses. In software engineering, the rules of the system are predefined and well understood.

Software...
(snorts)
well-understood...
(busts into laughter)
rules... well defined...
(roars with laughter)

I don't know which is funnier, your post
(laughs louder)
or the fact that it is modded up for insightful instead of for funny.
(falls off chair, gasps, struggles to stop laughing so hard)
C'mon, 'fess up: you were being snarky. And if this was a successful trolling, you are da man...
(busts into giggles again)

Wow...
(wipes tears from eyes)
Software engineering being superior to civil engineering --
(starts laughing again).
Poorly understood physics --
(more laughter).
Man, I wanna party with you -- that's some fsckin' brilliant trollage.

Re:Comparison (1)

starfishsystems (834319) | more than 7 years ago | (#15254495)

I read recently that the Tacoma bridge failure was due to a characteristic of suspended structures that could only have been modelled using chaos theory.

In other words, you're right that the engineering standards in place at the time were not entirely adequate, but there was no adequate alternative either. One would not emerge for about thirty years, at which point engineering practice would be revised.

It seems to me that this pattern of progress is mostly okay. It should give us a big dose of humility when we're contemplating building nuclear reactors, which is why they talk about the Tacoma bridge in engineering classes. On the other hand, an imperfect document standard, say, is quite a bit better than no standard at all.

Re:Comparison (1)

2.7182 (819680) | more than 7 years ago | (#15254568)

There is a lot of hype in the news about chaos theory, but as an applied mathematician I don't see many applications of it. There are a lot of theories as to why it fell, but, in my opinion, chaos theory is neither useful in such a case nor even applicable. If you ask people concrete questions like "what would you actually calculate using chaos theory to show this", I never get an answer.

By definition, chaotic systems are sensitive to initial conditions, so small changes in the initial states make big long-term changes. This has been a problem in using the idea of chaos to simulate real systems - noise matters a lot.

Not Chaos Theory (1)

TheOldBear (681288) | more than 7 years ago | (#15255037)

Just simple forced harmonic oscillation. Exactly like a stream of air [or a bow] vibrating a violin string.

One of my professors [at Brooklyn Poly] analyzed the failure for his doctoral dissertation. [That would have been about 1940 or so.]

He found that the Bronx-Whitestone bridge had the same failure modes, and was responsible for designing some remedial alterations to the structure.

Re:Comparison (1)

Alioth (221270) | more than 7 years ago | (#15254641)

Well, that's if time was running backwards - the Tacoma Bridge collapsed 8 years before the 1948 standard...

Re:Comparison (1)

Cal Paterson (881180) | more than 7 years ago | (#15254738)

But let's be honest, it's pretty obvious to anyone that the MS XML format is shit now. Forget the far future; I don't want to use that today!

Hopefully not? (0)

fireboy1919 (257783) | more than 7 years ago | (#15254065)

I hate XML. It's not easy for humans to read as a wire protocol. It's not easier for computers to read than binaries. So I don't see the point, since that's what it's used for almost exclusively. Adding to that is the fact that attributes and nodes are two different things that are, in general treated the same (and the functionality can be achieved without attributes by making an "attribute" node and putting all the attributes under it).
We should be using something like JSON or YAML.

However, DOM and XSLT are both awesome ideas - especially for parsing documents.

Maybe this will lead them to adopt an XML-equivalent technology that is easier to read and parse.

Re:Hopefully not? (4, Insightful)

albalbo (33890) | more than 7 years ago | (#15254092)

Well, I would agree with you about XSLT - but that's an XML technology, you realise? XSLT is actually one of the handy tools which you have access to. As an example, I was able to convert a large number of documents from HTML to OpenDocument using XSLT, and I would have had to write my own parsers etc. if the files on both sides weren't XML.

XML is handy because there's a lot of wheel reinvention that you just don't need to do. Also, it's not just a way of structuring data; the comparison to JSON or YAML isn't really well-founded, because they're not feature-equivalent.
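As a rough sketch of that kind of conversion, here is how the XSLT step looks from Python using the lxml bindings (an assumption on my part; the parent doesn't say what processor they used, and "html2odf.xsl" is a made-up stylesheet name):

    # Apply an XSLT stylesheet to an XHTML source and write the result out.
    # Both the stylesheet and the input file names are placeholders.
    from lxml import etree

    transform = etree.XSLT(etree.parse("html2odf.xsl"))
    result = transform(etree.parse("chapter1.xhtml"))

    with open("content.xml", "wb") as out:
        out.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))

Because both ends are XML, the whole job is one stylesheet plus a few lines of glue; there is no hand-written parser on either side.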

Re:Hopefully not? (1)

fireboy1919 (257783) | more than 7 years ago | (#15254239)

You're not getting the point, really. XSLT is an XML tech because XML is a wire protocol - a way to serialize data.

With the exception of attributes (which, as I mentioned, are a weirdism of XML), XPath would be about the same if you used it with any other serialization language, and the only difference in XSLT is that it would look like the data serialization language rather than XML.

Which means no extremely noticeable differences.

What features of XML are you talking about that aren't in JSON or YAML besides attributes? Not requiring well-formedness? Fine. When I say "JSON" or "YAML" I really mean "well-formed JSON or YAML." That should make it at least as easy to write parsers for.

DTDs? Work as well with non-xml.

I think by "feature" you actually mean "codebase." That's about all XML has over those other things. And even then you'd get to keep most of it, because a lot of that code is about structure, not formatting. Of course, I'm willing to be enlightened. What can you do with XML that you can't with another wire protocol at least as well?

Re:Hopefully not? (1)

albalbo (33890) | more than 7 years ago | (#15254451)

Well, off the top of my head, here are a few things that XML can do that JSON/etc. cannot do:

  • be obvious about what character set they are encoded in;
  • be written in multiple languages at the same time, without a priori knowledge of which sections are written in which language;
  • include tags/attributes from multiple DTDs within the one document via namespaces;

You cannot solve any of those problems in JSON or YAML in such a way that any other JSON or YAML app can understand without making changes to those apps. XML can do all of those things and more, and it's useful: e.g., embedding SVG into an XHTML document.
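A minimal sketch of all three points in one (invented) document: a declared encoding, per-element language tagging, and a second vocabulary pulled in through its own namespace:

    <?xml version="1.0" encoding="UTF-8"?>
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
      <body>
        <p>Hello.</p>
        <p xml:lang="de">Guten Tag.</p>
        <!-- SVG embedded in XHTML via its namespace -->
        <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
          <circle cx="50" cy="50" r="40"/>
        </svg>
      </body>
    </html>

Any namespace-aware XML parser can pick the SVG island out of this without prior knowledge of the combination; there is no comparable convention that a generic JSON or YAML reader would honour.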

Re:Hopefully not? (1)

fireboy1919 (257783) | more than 7 years ago | (#15254872)

Namespaces are their own spec that was added onto XML afterwards. There's nothing keeping that from working anywhere else.

Character set is something that is specified in the document. How would that change?

It seems you're still using "codebase" arguments, rather than "actual features of the format."

You cannot solve any of those problems in JSON or YAML in such a way that any other JSON or YAML app can understand without making changes to those apps. XML can do all of those things and more, and it's useful: e.g., embedding SVG into an XHTML document.

The same can be said for any XML readers that don't support those formats. So?

Re:Hopefully not? (0)

Anonymous Coward | more than 7 years ago | (#15255132)

I'd hate to break it to ya bub, but that, by definition, was a comparison.

Re:Hopefully not? (1)

Amouth (879122) | more than 7 years ago | (#15254098)

i am glad to see someone else who sees the same light as me.. it makes me have hope in the world

Re:Hopefully not? (1)

cduffy (652) | more than 7 years ago | (#15254526)

If you'd had enough opportunities to work with binary formats built for custom, one-off, non-source available parsers and XML-based formats and tools for manipulating them, I think you'd have changed that opinion by now. XML makes interoperability work much, much easier -- and is vastly more readable than the bit-packing one-off formats people come up with otherwise.

Having a structure which can be used to build unambiguously parsable data formats is a Good Thing. Having some level of self-documentation on top of that is icing on the cake.

Re:Hopefully not? (1)

Amouth (879122) | more than 7 years ago | (#15254875)

it isn't that i majorly dislike xml.. i just dislike people who all of a sudden think it is the only way of doing something

Re:Hopefully not? (1)

init100 (915886) | more than 7 years ago | (#15255216)

I also get the impression that binary formats are harder to compress than text formats (e.g. XML). Case in point: Microsoft DOC format vs. OpenDocument Text (ODT) format.

One word. (2, Insightful)

Spy der Mann (805235) | more than 7 years ago | (#15254178)

Interoperability.

I agree there's a lot of overhead in having to translate between text and binary data, but the point is that XML isn't used exclusively for processing. It's for INFORMATION INTERCHANGE.

OpenDocument is an xml format, but it's an OPEN format, completely documented and with no loose ends. Furthermore, it's very similar to HTML, so the algorithms to process it are similar, too.

On the other hand, Microsoft's "Open"XML... eew.

Re:One word. (1)

blirp (147278) | more than 7 years ago | (#15254252)

Interoperability

But when everything is strings, you get all the funny, strange problems with decimal, thousands, time, and date separators. So you gained some and lost a lot.

M.

Re:Hopefully not? (1)

DJCacophony (832334) | more than 7 years ago | (#15254180)

XHTML is an XML technology, and humans can read it easily. MathML is a great language to learn, if you're interested in mathematics. There's more to XML than just plain "XML", that's why it's called the Extensible Markup Language.

Re:Hopefully not? (1)

fireboy1919 (257783) | more than 7 years ago | (#15254482)

XHTML is an XML technology, and humans can read it easily.

No. For example, I have problems reading this bit from an XHTML document:

    /* Compares i, j, and k and returns the greatest. */
    function comparisonFunction(i, j, k) {
        if (i < j)
            if (j < k)
                return k;
            else
                return j;
        else
            if (i < k)
                return k;
            else
                return i;
    }

XML doesn't allow any of its nodes to NOT contain valid XML, so you end up writing the stupid &lt; and &gt; entities instead of < and >. Either that or you have to use a CDATA section, which really has no reason to be there.
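For anyone who hasn't hit this, the two workarounds being complained about look roughly like this (a trimmed sketch, not a complete page):

    <script type="text/javascript">
      // escaped: every comparison operator becomes an entity
      if (i &lt; j) { doSomething(); }
    </script>

    <script type="text/javascript">
    //<![CDATA[
      if (i < j) { doSomething(); }  // a literal < is allowed inside CDATA
    //]]>
    </script>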

Re:Hopefully not? (1)

DJCacophony (832334) | more than 7 years ago | (#15254920)

I didn't say you couldn't understand it. Just because you don't know what something means doesn't mean it's unreadable. Once it shows up in your web browser it's perfectly readable.

Re:Hopefully not? (1)

etymxris (121288) | more than 7 years ago | (#15254224)

XML is easier to program with and debug. When things go wrong, and they always do eventually, it's really nice to be able to just open up the failing object in a text editor rather than having to piece it apart in a hex editor or relying on a tool specialized for that format that could introduce its own problems.

Re:Hopefully not? (2, Interesting)

jrumney (197329) | more than 7 years ago | (#15254237)

Adding to that is the fact that attributes and nodes are two different things that are, in general treated the same (and the functionality can be achieved without attributes by making an "attribute" node and putting all the attributes under it).

That is perhaps the biggest mistake developers make when they design their XML schema (or DTD), and leads to ...

I hate XML. It's not easy for humans to read as a wire protocol.

If you keep the things that are supposed to be human readable as the text within nodes, and move the rest (formatting instructions etc) into attributes, your XML will be much more readable after some simple processing to remove the nodes. Using attributes for all those small name-value pairs that XML documents are full of also reduces the size and makes parsing more efficient.
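A small illustration of that split, with element and attribute names invented for the example:

    <!-- readable content as text, machine-oriented details as attributes -->
    <para style="Body" margin-top="4mm">
      The text a person actually wants to read lives here.
    </para>

    <!-- the anti-pattern: every name-value pair as its own child node -->
    <para>
      <property name="style" value="Body"/>
      <property name="margin-top" value="4mm"/>
      <content>The text a person actually wants to read lives here.</content>
    </para>

Strip the attributes from the first form and you are left with something close to plain prose; the second form buries the prose in structural noise and roughly doubles the size.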

Re:Hopefully not? (1)

YU Nicks NE Way (129084) | more than 7 years ago | (#15254304)

If you keep the things that are supposed to be human readable as the text within nodes, and move the rest (formatting instructions etc) into attributes, your XML will be much more readable after some simple processing to remove the nodes.
Interestingly, this is exactly one of the "technical flaws" that GrokLaw claims to have found in OpenXML.

Re:Hopefully not? (1)

albalbo (33890) | more than 7 years ago | (#15254484)

If you keep the things that are supposed to be human readable as the text within nodes, and move the rest (formatting instructions etc) into attributes, your XML will be much more readable after some simple processing to remove the nodes.
Interestingly, this is exactly one of the "technical flaws" that GrokLaw claims to have found in OpenXML.

No, that's not what the article says. OpenXML will not mix node and text siblings; that's nothing to do with whether or not you put data in as text nodes or node attributes.

Re:Hopefully not? (1)

CRCulver (715279) | more than 7 years ago | (#15254241)

I hate XML. It's not easy for humans to read as a wire protocol.

Speak for yourself. I find this markup language fits my thought processes very well, and I enjoy working with XML and its dialects very much.

Re:Hopefully not? (1)

Ernesto Alvarez (750678) | more than 7 years ago | (#15254256)


I hate XML. It's not easy for humans to read as a wire protocol. It's not easier for computers to read than binaries. So I don't see the point, since that's what it's used for almost exclusively.


I think XML has its uses. It is somewhat difficult to read for both humans and computers, but not that much. It's a good compromise since it should be easy to read with the right library, while still human readable for emergencies. I think it's good for machine readable files, like word processor documents and such.

What I do hate is developers thinking that XML is straight human readable. I hate people who use straight XML config files (that I, being the administrator, have to edit by hand). On that part I'm 100% with you.

Anyway, ODF is XML. It's a bunch of XML files in a zip, so your criticism would apply to both ODF and MSXML.

Re:Hopefully not? (2, Interesting)

AKAImBatman (238306) | more than 7 years ago | (#15254268)

I hate XML. We should be using something like JSON or YAML.

JSON and YAML are more focused formats intended for lightweight transmissions and compatibility with existing computer languages, and tend to complement XML rather than supplant it.

XML is designed as a "catch-all" format that is capable of storing any form of data. That makes it extremely powerful, yet sometimes quite unwieldy.

Each format has its tradeoffs, and as a result it is hard to say that one is "better" than the other. For example, XML's verbosity allows parsing errors to be much more easily identified and repaired while simultaneously preventing accidental errors from going unnoticed. In YAML and JSON it is much easier to introduce unintended characters or data structures without the parser noticing. Neither one (to my knowledge) has the ability to check the structure of the transmission like XML DTDs and Schemas do.

However, DOM and XSLT are both awesome ideas - especially for parsing documents.

You've just given two reasons for the existence of XML. Both concepts are extensions of the XML concept, and are not necessarily applicable to other data-exchange formats. (At least not without massive changes.)

XML was designed with the DOM in mind so that any type of flat or hierarchical data could easily be loaded and stored programmatically. This cuts down on the number of programs that attempt to construct an interchange document manually. That rigid structure in turn makes way for the programmatic transformation of such documents, a la XSLT.
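On the schema point a couple of paragraphs up, a minimal sketch of what that structural check looks like in practice, again using lxml (the schema and document file names are placeholders):

    # Validate a document against an XML Schema. Stock JSON/YAML parsers
    # have no built-in equivalent of this step.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("invoice.xsd"))
    doc = etree.parse("invoice.xml")

    if not schema.validate(doc):
        for error in schema.error_log:
            print(error.line, error.message)

If the document is missing a required element or has one in the wrong place, the error log says so with a line number, before any application code ever runs.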

Re:Hopefully not? (0)

Anonymous Coward | more than 7 years ago | (#15254766)

"Adding to that is the fact that attributes and nodes are two different things that are, in general treated the same (and the functionality can be achieved without attributes by making an "attribute" node and putting all the attributes under it)."

Perhaps what you are looking for is Lisp. It doesn't have the attribute baggage, it's easy to parse, it's easier to type, and it doesn't hold onto the artificial distinction between data and code.

Consider:
      <some_xml foo="1" bar="joe"><body/></some_xml>

      vs

      (some_lisp (foo 1) (bar "joe") (body))

What technical weaknesses in OpenXML? (1, Insightful)

Anonymous Coward | more than 7 years ago | (#15254166)

What specific technical weaknesses are in OpenXML? I'm sincerely interested (and need to make some decisions for my company on this), but I don't want religious crap; give me the real technical differences.

Re:What technical weaknesses in OpenXML? (1)

oztiks (921504) | more than 7 years ago | (#15254278)

... I don't want religious crap, give me the real technical differences. ...

It is believed by the Jewish faith that the real document format has yet to come

Re:What technical weaknesses in OpenXML? (3, Insightful)

mrchaotica (681592) | more than 7 years ago | (#15254456)

Well, for one, ODF is an ISO standard and is implemented in a bunch of different programs now, and OpenXML isn't. Lack of interoperability is a pretty crippling technical weakness, since we're talking about a document format.

Not technical, but business reasons... (4, Informative)

Svartalf (2997) | more than 7 years ago | (#15254771)

OpenXML is patent-ridden in a way that is problematic at best, compared to OpenDocument. ODF is also covered by patents, but unlike MS's offering, those patents have free licensing for conformant implementations, where "conformant" means conforming to the officially stated spec, with the possibility of extensions becoming part of it. MS's offering, by contrast, requires you to meet MS's shifting definition of what is and isn't compliant (i.e. it's not explicitly stated), and you don't get to add improvements unless MS embraces and extends them themselves (i.e. if you've got extensions and MS doesn't approve of them, you're NOT compliant at all and can be sued for patent infringement).

Technically, they're the same. This is why people can't understand why MS insists on NOT supporting ODF as a format and keeps trying to push OpenXML, unless they've got some ulterior motive. Now they have little valid excuse for it.

Re:What technical weaknesses in OpenXML? (4, Informative)

DrXym (126579) | more than 7 years ago | (#15254871)

The Groklaw article points out a few of them. The most damning is that Microsoft has chosen to ignore existing, reusable standards like XLink, SVG, Dublin Core, etc. in favour of their own proprietary tags. These standards were expressly produced because they represent reusable patterns that many document formats need but which shouldn't be respecified by each of them. The upshot is that parsing OpenXML will be a massive pain in the butt, because none of your existing scripts / tools / editors that have built-in knowledge of those standards will work with OpenXML.
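As a concrete example of the reuse being described: ODF's meta.xml leans on the Dublin Core namespace for its basic document metadata, roughly along these lines (simplified, element names from memory rather than copied from the spec):

    <office:document-meta
        xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
        xmlns:dc="http://purl.org/dc/elements/1.1/">
      <office:meta>
        <dc:title>Quarterly report</dc:title>
        <dc:creator>A. Author</dc:creator>
        <dc:date>2006-05-02T10:00:00</dc:date>
      </office:meta>
    </office:document-meta>

Any tool that already understands Dublin Core can read that much of an ODF file without knowing anything else about the format; that is exactly the kind of head start OpenXML gives up.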

That would be nice, but.... (1)

j3one (949806) | more than 7 years ago | (#15254177)

"There's no doubt that this broad vote of support will serve as a springboard for adoption and use of ODF around the world..."

mmm, we shall see..

This gives me more amunition. (3, Insightful)

bogaboga (793279) | more than 7 years ago | (#15254184)

This vote will certainly give folks like me more ammunition to take on companies like Microsoft at home. With this development, I can push for the following line:

..."The software must be able to read and write the OpenDocument format approved by ISO/IEC"...

The parties involved, I believe, will know that this standard is free for all to implement. Kudos to ODF.

Re:This gives me more amunition. (1)

ZachPruckowski (918562) | more than 7 years ago | (#15254752)

Why is that a troll? He's 100% correct. That ODF is certified gives people more of an impetus to support it. This will mean that OpenOffice/KOffice have a feature that MS Office doesn't, potentially fueling adoption.

Re:This gives me more amunition. (2, Insightful)

Americano (920576) | more than 7 years ago | (#15255030)

Definitely didn't deserve a troll. But I think what you're going to end up seeing is Microsoft will simply support ODF in Office. Yes, yes, they've said they won't. But if they start losing customers to OpenOffice, KOffice, StarOffice, and other competitors that support ODF, you can bet your ass that Microsoft will add support for ODF, and put one of those little "Would you like to save this in the Microsoft Office Default Format, which offers significant advantages over the original ODF specification," nag screens in. Then they can claim complete standards compliance, too.

Microsoft as a company has never struck me as a suicidally dogmatic entity. If everybody demands it, and they start losing their shirt in the office productivity market, they'll adapt and do what they need to do to stop the loss. Since they can't "acquire & retire" OpenOffice or other open source competition, they'll have to change their software.

That's free market competition... and that's good for us lowly consumers in the long run. Microsoft cannot "kill" ODF unless it releases a clearly superior competing technical standard.

Re:This gives me more amunition. (1)

moochfish (822730) | more than 7 years ago | (#15255282)

And then the CEO replies:

"ISO? But I think my Nero program can burn MS Word documents into ISOs..."

So much for the list of experts (0, Offtopic)

Viol8 (599362) | more than 7 years ago | (#15254197)

Experts from Boeing, bring them on. Experts from the Society of Biblical Literature?? Wtf?? What the hell have they got to do with a computer data formatting standard??

Or did they just require some people who had experience of a large project at its Genesis.

Re:So much for the list of experts (4, Insightful)

Kadin2048 (468275) | more than 7 years ago | (#15254276)

Actually, if you think in terms of who's really interested in processing, archiving, and disseminating large volumes of text to people all over the world, it's not hard to imagine that religious organizations would be at the top of the list. They have huge archives, and probably desire both interoperability and stability (no "format of the week" syndrome).

It's honestly tough to find many organizations that really are thinking past the next quarter or fiscal year; in most industries people are buying software and hardware for the here-and-now. If that document isn't accessible in 15 years, who cares? Outside of their mandated recordkeeping obligations (Sarbanes-Oxley, etc.), a lot of large commercial organizations probably wouldn't care if their documents were written with magic disappearing ink that rendered them unreadable in a few years or a decade. (To be fair, the majority of commercial text is probably nothing that you'd want to read in a decade -- memos, meeting minutes, reams of emails; most of it probably makes little sense outside its original context anyway.)

I think this attitude is shortsighted, but it's pervasive. Nobody wants to think about long-term storage, nobody wants to think about accessibility 10 or 20 or 100 years from now, except libraries, governments, and religious institutions. (And perhaps some of the very largest and longest-lived corporations.) So it makes sense that if you're designing a data format that you want to be around for a while, you'd want to bring on board the people who have the most interest in making it successful.

Re:So much for the list of experts (1)

Kjella (173770) | more than 7 years ago | (#15254477)

Outside of their mandated recordkeeping obligations (Sarbanes-Oxley, etc.) a lot of large commercial organizations probably wouldn't care if their documents were written with magic disappearing ink that rendered them unreadable in a few years or a decade.

Well, I think a lot of them would appreciate it if they were (see Sarbanes-Oxley) ;)

Re:So much for the list of experts (1)

ZWithaPGGB (608529) | more than 7 years ago | (#15254530)

Actually, companies have a vested interest in aging (i.e. destroying or rendering illegible) documents. There's an entire industry, dubbed "document retention" [kahnconsultinginc.com], that is actually focused on destroying anything that might be used as evidence in a legal proceeding as soon as it is no longer needed or as soon as legally allowable, whichever comes first.

Re:So much for the list of experts (1)

Kadin2048 (468275) | more than 7 years ago | (#15255214)

I actually almost mentioned that in my original post, but decided it wasn't directly relevant to my point.

But if you look at the selling points of most corporate/enterprise email systems (e.g. Notes, Exchange), one of the big features is "document expiration." You can make it so that emails just magically disappear after some preset time...unless of course someone printed them out or saved them to a text file. Although I would expect that future systems might prevent this on certain documents (if they don't already -- I can imagine a "no print / no save" flag that was really enforced would be a big feature). Ignoring how hard that would be to effectively implement, of course.

Re:So much for the list of experts (1)

mrchaotica (681592) | more than 7 years ago | (#15254534)

Actually if you think in terms of who's really interested in processing, archiving, and disseminating large volumes of text to people all over the world, it's not hard to imagine that religious organizations would be at the top of the list. They have huge archives, and probably desire both interoperability and stability (no "format of the week" syndrome).
Exactly -- religious organizations are used to trying to piece together tattered 1000-year-old manuscripts that were written in a dead language and with faded ink. They're certainly extremely interested in preventing the need to do the equivalent of that (e.g. deciphering a binary MS Word 2.0 document) again in the future.

Re:So much for the list of experts (5, Insightful)

AKAImBatman (238306) | more than 7 years ago | (#15254348)

Experts from Boeing, bring them on.
Experts from the Society of Biblical Literature?? What have they got to do with a computer data formatting standard??


Isn't it obvious? Literary organizations have massive numbers of documents that need to be digitized and archived in perpetuity. As a result, they have a vested interest in using standardized formats that will be guaranteed to meet their needs for years to come. The Society of Biblical Literature is no different in these respects, especially as more and more fragments of apocryphal and gnostic texts continue to be found.

Re:So much for the list of experts (-1, Flamebait)

Viol8 (599362) | more than 7 years ago | (#15254385)

"more fragments of apocryiphal and gnostic texts continue to be found."

Hmm , yes , they'd take up at least a few pages of A4. Besides which, I
wouldn't trust religious nutters with any kind of standard to be used
by normal people.

Re:So much for the list of experts (1)

AKAImBatman (238306) | more than 7 years ago | (#15254540)

You sir, are being an ass. Did it ever occur to you that historical archives of all types are useful to more than just "religious nutters"? Or do you believe that the Odyssey should be discarded as a "work of religious nutters"?

Would it injure you to use that lump of grey matter between your ears before making such an inane comment?

Re:So much for the list of experts (1)

cduffy (652) | more than 7 years ago | (#15254711)

Not everyone who's religious (or who studies religion) is a raving lunatic intent on making the rest of the world do things their way Because God Said So; it's just that set which makes the press, generates the noise, and results in bad laws and worse politicians being foisted off on everyone else.

Characterizing a scholarly society revolving around the study, analysis, archival and dissemination of historical documents as a group of "nutters", without any evidence other than the area of study those documents correspond to... well, it's certainly inappropriate. Religion is an entirely legitimate field of study, even should you consider it bogus in every other way.

Re:So much for the list of experts (0)

Anonymous Coward | more than 7 years ago | (#15254436)

Experts from the Society of Biblical Literature?? Wtf?? What the hell have they got to do with a computer data formatting standard??

That's kind of an arrogant thing for you to say. How much thought did you put into this? Zero? Yeah, I'm not really surprised.

Could it be *gasp* that the Society of Biblical Literature has been creating massive, complicated cross-referenced documents for hundreds of years, and that maybe they have more value to add to this than a company like Boeing? That you reject them out of hand says a lot about you.

Re:So much for the list of experts (2, Insightful)

jmorris42 (1458) | more than 7 years ago | (#15254589)

> Experts from the Society of Biblical Literature?? Wtf?? What the hell
> have they got to do with a computer data formatting standard??

Oh I dunno. ODF had as design goals long-term document storage and seamless internationalization. I suspect the Society of Biblical Literature has an interest in both. Unless you are so ignorant that you believe Moses and Jesus spoke the English of King James, that is. You probably wouldn't believe just how many languages and scripts the original texts are written in. If ODF can deal with all of those, it shouldn't have a problem with any of the modern encodings.

And if you know of anyone with older documents, and likely to still be using them a thousand years hence, speak up.

Re:So much for the list of experts (1)

Cal Paterson (881180) | more than 7 years ago | (#15254772)

It's literature, isn't it? After all, it's one of the most popular books of all time. And by that, I do mean all time. I'm fiercely non-religious, but let's be honest, the Christian Bible is a pretty widely read book.

One small standard for a man (-1, Flamebait)

From A Far Away Land (930780) | more than 7 years ago | (#15254214)

One giant leap for Standardkind.
The movement toward OpenDocument in the free world, warms the open cockles of my heart.

I can't wait to kiss the incompatible document blues goodbye. The article earlier today on Slashdot on how to format your document without using the TAB key will be obsolete, praise be. Imagine? We're being told not to use the tab key, in this "Microsoft competes with the rest" world.

Re:One small standard for a man (2, Insightful)

Red Flayer (890720) | more than 7 years ago | (#15254357)

"The movement toward OpenDocument in the free world, warms the open cockles of my heart. (Emphasis mine)

I sure hope the chambers of your heart aren't open, you might want to visit the doctor if so.

But if the cockles you're referring to are the bivalve mollusc kind, they are always open -- cockles don't shut. However, they are hermaphroditic and they can jump. Which still presents a problem for your cardiac health.

Seriously, though, formal recognition of this standard removes one of the obstacles to widespread implementation of non-MS office software. The bigger hurdle, of course, is retraining & support expenses (for businesses) and factory (or pre-purchase, anyway) installation of the software (for home users).

This doesn't change the fact that MS formats are the de facto standards in use, but it may help unify the communities that use non-MS formats, leading to a larger install base.

Whoa there, Mr. Snarky. (1, Troll)

biendamon (723952) | more than 7 years ago | (#15254217)

OpenDocument is a good format, both from a user standpoint and a technical one. How you feel about OpenXML is another matter entirely, but it's not the one that just got voted in as an ISO standard.

Re:Whoa there, Mr. Snarky. (1)

cduffy (652) | more than 7 years ago | (#15254763)

Hey -- how was I supposed to get the editors' attention without some offtopic comment? When in Rome, and all that.

(More seriously, though... how big of an advantage being an ISO standard is to OpenDocument in terms of adoption is likely to vary with whether OpenXML ends up being ratified as well).

Good news (3, Interesting)

spectrumCoder (944322) | more than 7 years ago | (#15254330)

If Microsoft implements OpenDocument (or anything like it) in Office 2007 it will make a lot of people very happy.

A blank Word document takes up eleven kilobytes, and a one page document takes up about forty. If this becomes the de facto standard for documents rather than the Word document format, then document file sizes will shrink significantly, and a lot of bandwidth and disk space on office networks will be saved as a result.

Re:Good news (1, Interesting)

tomstdenis (446163) | more than 7 years ago | (#15254369)

I'm writing a book in Word [yeah I know, shudder] and my 57-page 2nd chapter is about 340KB on disk. It sports 10 figures and lots of styles (from normal paragraphs, to emphasis, to source code, etc...)

Maybe you put in high-res graphics and are using tracked changes?

Tom

Re:Good news (5, Informative)

CastrTroy (595695) | more than 7 years ago | (#15255117)

I just tried it.

MS Word 2002: new document, no text, 20 KB; 500 words from Lorem Ipsum, 23 KB; 300 pages of that same first page repeated, 1,128 KB.

OpenOffice.org 2.0: new document, no text, 6 KB; same 500 words from Lorem Ipsum, 10 KB; 300 pages of repeated text, 22 KB (wow, too easily compressed); let's try 300 pages of non-repeated text: 329 KB.

You save quite a bit. I find that once you start adding images and other things like that, you end up saving even more space.

Re:Good news (0)

Anonymous Coward | more than 7 years ago | (#15255230)

Part of this has to do with the fact you can do this:

unzip filename.odt

Which I think was actually a pretty sensible thing for them to do. XML is quite verbose and can benefit greatly from even minimal compression.
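A quick way to see that structure from Python, for anyone curious (the member names in the trailing comment are what you typically find in an .odt; exact contents vary by application):

    # An .odt file is just a zip archive; list the pieces inside it.
    import zipfile

    with zipfile.ZipFile("document.odt") as odt:
        for name in odt.namelist():
            print(name)   # typically: mimetype, content.xml, styles.xml,
                          # meta.xml, settings.xml, META-INF/manifest.xml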

ODF makes sense (1)

MarkWatson (189759) | more than 7 years ago | (#15254371)

I joined the OpenDocument Format Alliance (http://www.odfalliance.org/) recently - partly to keep connected with what they are doing and partly to support a good cause.

I understand how entrenched Microsoft Office is in many organizations but hopefully common sense will prevail - want permanent free access to your data? Then use ODF.

Although I am a 'programming language junkie' (I am happily coding away in Ruby and Common Lisp this morning on a new long-term AI engagement :-), I always think of systems as data, with software added as needed. Seriously, get the data (structures, schema, persistence, etc.) right and the rest is easier. Who would want to build systems around Office formats?

Re:ODF makes sense (0, Troll)

Viol8 (599362) | more than 7 years ago | (#15254455)

"doing and partly to support a good cause."

Err, yeah, right, whatever. I mean, obviously millions are worrying about the data format of Word documents, and I heard that even Bono discussed it with major world leaders and the Pope recently, but personally if I'm going to support a good cause I go for stuff like Oxfam or the Red Cross or cancer research. Call me old fashioned if you will.

Re:ODF makes sense (1)

MarkWatson (189759) | more than 7 years ago | (#15254925)

My wife and I donate to the Heifer Project, Habitat for Humanity, and the American Friends Service Committee (AFSC) every month with automatic credit card payments.

If you don't think that open document formats, an open Internet, etc. are important, you are certainly entitled to your own opinions.

Re:ODF makes sense (1)

squallbsr (826163) | more than 7 years ago | (#15254616)

Who would want to build systems around Office formats?

Chills just ran up and down my spine!

Formulas? (2, Interesting)

Makzu (868112) | more than 7 years ago | (#15254377)

I just hope that OpenDocument gets its formula standards in order. I've read in a few places that there is very little documentation in the standard proper about how formulas (for spreadsheets) should be stored and used, which could in time cause some compatibility problems. That being said, I'm glad that it was approved by the ISO... maybe in a few years I'll not have to worry about converting from one office format to another ad absurdum.

Showing my ignorance here... (1)

hcob$ (766699) | more than 7 years ago | (#15254475)

But I would wager that Microsoft WILL implement ODF, then tag on some extra data that they call DRM, and then all of a sudden you can still be locked out of your documents for not paying Microsoft extortion.... eerrrrr... subscription fees!

Re:Showing my ignorance here... (1)

Secrity (742221) | more than 7 years ago | (#15254974)

AFAIK, if somebody tried to extend the ODF format by adding extra things that weren't approved by the people who govern ODF, then the resulting format could no longer be considered ODF. Which isn't much different from what Microsoft is trying to do now.

Mixed content model... (3, Insightful)

Numen (244707) | more than 7 years ago | (#15254818)

When comparisons between the formats hold up a mixed content model against a non-mixed one, ask "which would you rather transform", and expect the answer to be "mixed", you know a lot of the people throwing opinions around on this issue have never actually worked at transforming XML.
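For readers who haven't met the distinction: "mixed content" means text and child elements sitting as siblings, while a run-based model keeps all text inside leaf elements. Roughly, with the element names simplified (the second shape is only in the spirit of Microsoft's run model, not its exact tags):

    <!-- mixed content: text and elements interleaved -->
    <p>Some <b>bold</b> words.</p>

    <!-- run-based, non-mixed content: text only inside leaf elements -->
    <p>
      <r><t>Some </t></r>
      <r bold="true"><t>bold</t></r>
      <r><t> words.</t></r>
    </p>

The second shape is noisier to read but mechanically simpler to transform, because every text node has exactly one element parent.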

If you're wanting a human readable document format you have XHTML. Use it and enjoy. If you're producing an interchange format for word processing applications I'll take unambiguous and explicit over ambiguous and implicit even if that is at the expense of human readability.

The MS model uses a manifest to resolve link references; ODF uses absolute references... this is criticised by Groklaw on the basis of human readability, not maintainability, application use, refactoring or normalisation of data.

There are valid problems that can be cited for both formats (I wish for instance MS had stuck with XLink), but this is quickly resolving into another round of MS bad, anything else good. It's emotive and is in most cases prejudged before technical merits are weighed.

I guess I just resent being asked whether I'd prefer to transform a mixed content model by somebody I know has never done so.