Mozilla Tries New "Lorentz" Dev Model

ScuttleMonkey posted more than 4 years ago | from the does-it-provide-synergistic-roi dept.

With the recent release of Firefox 3.6, Mozilla has also decided to try out a new development model dubbed "Lorentz." A blend of both Agile and more traditional "waterfall" development models, the new methodology aims to deliver new features much more quickly while still maintaining backwards compatibility, security, and overall quality. Only time will tell if this is effective, or just another management fad. "If the new approach sounds familiar, that's because Unix and Linux development has attempted similar kinds of release variations for iterating new features while maintaining backwards compatibility. HP-UX, for example, is currently on its HP-UX 11iv3 release, which receives updates several times a year that add incremental new functionality. The Linux 2.6.x kernel gets new releases approximately every three months, which include new features as well."

No (0)

bluefoxlucid (723572) | more than 4 years ago | (#30892986)

The Linux 2.6 model sucks. 2.6, 2.8, 2.10, etc. became 2.6.1, 2.6.2, 2.6.3... on short support cycles.

Re:No (1, Informative)

Anonymous Coward | more than 4 years ago | (#30893132)

What's wrong is the absence of 2.7.* where the new features we're getting in the "stable" kernel should have stayed until they were stable.

Re:No (5, Insightful)

Magic5Ball (188725) | more than 4 years ago | (#30893520)

Mozilla has fallen into the classic trap of trying to expand its user base by increasing features, as opposed to keeping its user base by increasing quality.

We don't need new features directly in Firefox. Plugins do that. Remember that long ago the project made a conscious choice to take a performance hit to provide third-party access into the browser via the elaborate XUL and plugin frameworks, to minimize pushing code and features onto users who don't need them.

Re:No (1)

the_womble (580291) | more than 4 years ago | (#30893594)

Mozilla has fallen into the classic trap of trying to expand its user base by increasing features, as opposed to keeping its user base by increasing quality.

It works for Microsoft.

We don't need new features directly in Firefox. Plugins do that. Remember that long ago the project made a conscious choice to take a performance hit to provide third-party access into the browser via the elaborate XUL and plugin frameworks, to minimize pushing code and features onto users who don't need them.

It was a minority browser then. Most people do not install plugins, or install very few. Most people do not want to work out which plugins are incompatible with each other. Most people judge a browser by how good it is out of the box.

Re:No (1)

ByOhTek (1181381) | more than 4 years ago | (#30893650)

So they need to have all the extra bloat as plugins... and then distribute a 'Firefox' with all that stuff in it, and 'mozilla lite' without the bloat.

Actually, if you compile it yourself, can't you turn off most of the bloat?

Re:No (1)

Magic5Ball (188725) | more than 4 years ago | (#30893916)

Actually, if you compile it yourself, can't you turn off most of the bloat?

You or I could probably piece together an understanding of how to do that inside a working week or so via this list of fragmented documentation:
https://developer.mozilla.org/Special:Tags?tag=Build+documentation&language=en [mozilla.org]
but I don't think the rest of their intended audience of hundreds of millions of users should need to do that in order to use an efficient browser. (In the face of mobile phones and other light devices which provide fully capable browsers, pointing out that Firefox uses 200 MB less memory than any other browser is an indictment of both.)

I'd like to use a lite Gecko in a minimal skin, but not enough for me to want to spend the next 5+ years lobbying for it in their Bugzilla. (For reference, it took 10 years for the threaded document windows feature to get looked at, even though it was the standard everywhere else. Bug #40848.)

Re:No (1)

Mr Z (6791) | more than 4 years ago | (#30894928)

Wait... wasn't Firefox the "de-bloated" Mozilla? [wikipedia.org]

Re:No (2, Insightful)

Magic5Ball (188725) | more than 4 years ago | (#30893730)

The elegant solution to too much choice among plugins isn't to revamp the software development workflow, nor is it to load every conceivable feature into the default interface. (Assuming that the opposite were true, Firefox would ship with all 10,000 plugins loaded, which it does not.)

Since Firefox is starting to resemble an operating system anyway, it might be time for Firefox distributions, which default to a core consisting of functions expected of every browser, along with the small number of exceptional features/plugins/whatever which differentiate Firefox from everything else in a good way. (That would also give users tangible reasons to choose and stick with Firefox.) Otherwise, more features for the sake of more features leads to office productivity suites in which most users must download, install and load but will never use 95% of the available features.

Re:No (1)

morgan_greywolf (835522) | more than 4 years ago | (#30893768)

It works for Microsoft.

That would be a valid argument if Microsoft and the Mozilla Foundation had similar goals; they do not. Microsoft is a multibillion-dollar global company intent on making money; Mozilla just wants to make a browser and a few related applications.

It was a minority browser then. Most people do not install plugins, or install very few. Most people do not want to work out which plugins are incompatible with each other. Most people judge a browser by how good it is out of the box.

There's a word for that. It's called 'integration'. You add new features as extensions and then you provide browser packages which include a few extensions installed by default. That way, those who do not want a particular feature can remove it.

Re:No (2, Informative)

Magic5Ball (188725) | more than 4 years ago | (#30894212)

Mozilla Corporation's goals are substantially to do activities which bring in revenue, as with Microsoft. Mozilla's main vehicle for doing so is to package and distribute a browser through which income is generated via Google searches. To maximize revenues, they need to maximize both market share and usage of their browser.

The new focus, maximizing market share (quantity), could help, but not as much as a new strategy which maximizes both market share and usage (quality). Under those 30 MB or so of binaries, libraries and other stuff, I'm sure there exists a small feature subset which would give all Internet users a compelling reason to switch to and stick with Firefox, if that feature subset were promoted correctly.

Based on their list of "new features", http://www.mozilla.com/en-US/firefox/features/ [mozilla.com] , they don't seem to know what makes Firefox special.

Private Browsing - "Surf the Web without leaving a single trace." That inaccurately describes the functionality: Safari and Chrome can forget things in the browser too, but the feature as stated would require third-party anonymising proxies or the like.
Password Manager - "Remember site passwords without ever seeing a pop-up." - Most browsers have had features to remember form fields since the 1990s?
Awesome Bar - "Find the sites you love in seconds (and without having to remember clunky URLs)." This is better than bookmarks/favourites?
Super Speed - "View Web pages way faster, using less of your computer’s memory." OK.
Anti-Phishing & Anti-Malware - "Enjoy the most advanced protection against online bad guys." ... Using substantially the same database as IE and Chrome.
Session Restore - "Unexpected shutdown? Go back to exactly where you left off." Tires unexpectedly fall off? Can I have a browser that doesn't need to have this feature?
One-Click Bookmarking - "Bookmark, search and organize Web sites quickly and easily." It doesn't trouble me to click once or twice to bookmark something in any other browser, but OK...
Easy Customization - "Thousands of add-ons give you the freedom to make your browser your own." OK. But there's a handy guide to navigate those thousands, right?
Tabs - "Do more at once with tabs you can organize with the drag of a mouse." Like every other browser?
Personas - "Instantly change the look of your Firefox with thousands of easy-to-install themes." Instantly change the look of your windshield with thousands of ... No thanks. We got here without bringing MySpace all the way to the desktop. Also does not enhance the functionality of the browser.

Re:No (1)

morgan_greywolf (835522) | more than 4 years ago | (#30894884)

Mozilla Corporation's goals are substantially to do activities which bring in revenue, as with Microsoft. Mozilla's main vehicle for doing so is to package and distribute a browser through which income is generated via Google searches. To maximize revenues, they need to maximize both market share and usage of their browser.

Mozilla Corporation is a wholly-owned subsidiary of the Mozilla Foundation. As such, its revenue-raising activities are limited in scope to raising money for the Foundation and funding development activities for Mozilla's projects.

As such, Mozilla Corporation is profit seeking only in as much as it furthers Mozilla Foundation's goals.

Under those 30 MB or so of binaries, libraries and other stuff, I'm sure there exists a small feature subset which would give all Internet users a compelling reason to switch to and stick with Firefox, if that feature subset were promoted correctly.

See Epiphany and Chrome/Chromium. These projects are exactly that, but they're based on the leaner and faster Webkit libraries, rather than Gecko. Firefox aims to be a full-featured product, as you mention; the others aim at being lightweight and promoting Web standards, which were Firefox's original goals.

Re:No (1)

VGPowerlord (621254) | more than 4 years ago | (#30894984)

Firefox aims to be a full-featured product, as you mention; the others aim at being lightweight and promoting Web standards, which were Firefox's original goals.

I seem to recall those being the goals of the Mozilla Suite (now SeaMonkey). Firefox's goal was to be a lighter-weight version of the Mozilla browser.

Re:No (1)

icebraining (1313345) | more than 4 years ago | (#30898472)

Like H.264?

Re:No (0)

Anonymous Coward | more than 4 years ago | (#30893744)

A new browser will rise from the ashes, bloat free and blazing fast. Like the bird of myth, we will call it Phoenix! Oh wait...

Re:No (1)

Runaway1956 (1322357) | more than 4 years ago | (#30894432)

About:plugins

Peripheral attachments to a central code structure, which slow down the core code, consume resources, and cause potential errors. Note that it is possible to create and publish potentially malevolent plugins; witness Microsoft's recent .NET plugin fiasco.

'Nuff said?

Re:No (1)

Magic5Ball (188725) | more than 4 years ago | (#30894800)

If the plugin architecture has become a problem (and it has, due to analogs of shared memory and a lack of process isolation leading to potential security issues), then they should work to revise or remove it. Moving to a gushing agile waterfall feature stream or whatever development and release paradigm isn't a plausible solution.

Firefox reminds me of Windows 3.1 in an uncomfortable number of ways. Besides their co-operative multi-tasking environments and storing system settings in several different places, they share in common a very low barrier to entry with respect to writing userland apps (copy and paste some XML and JS to make a Firefox plugin, draw some boxes and remember BASIC to make a VB3 app) which enabled almost unlimited customization to occur very easily. This is good for adoption and getting onto many desktops, but leads to an ecosystem rich in poorly conceived and implemented applications which are vulnerable to each others' weaknesses.

I've seen discussions about how the plugin architecture and XUL may be revised to get rid of some of these concerns (and to get rid of some of the 5-12 layers of indirection and abstraction for some routine function calls), but I'm not confident that they will completely avoid the downfalls of analogous architectural changes made in Windows 95.

Forced add-on updates (4, Insightful)

roman_mir (125474) | more than 4 years ago | (#30894534)

You know that it is silly that every time a new version of FF comes out, every add-on author has to bump the version on his code and resubmit to AMO? Most of the changes from version to version of FF do not affect most add-ons at all, and yet there is this whole thing with add-ons having to be resubmitted and wait in the queue for weeks, when at the end the only change in the new version is the maxVersion tag in the install RDF.

On the other hand there is now talk of completely changing the system of interfaces between addons and the browser. Who has time and interest to rewrite the same thing over and over again?

Re:Forced add-on updates (1)

BitZtream (692029) | more than 4 years ago | (#30898836)

I cheat.

My addon is not on the mozilla website, and specifies its max version as 10.*.

Firefox doesn't bitch about anything when a new version comes out.

There are solutions to retarded developers within Mozilla today just like there were 10 years ago with Netscape.

Re:No (2, Informative)

msclrhd (1211086) | more than 4 years ago | (#30896466)

How do you know that Mozilla are not improving quality? If you pay attention, Mozilla are improving the quality of the codebase (memory consumption/leak fixes, crash fixes, etc.).

And while plugins do add some features, what about HTML5 support? Support for SMIL animations in SVG? Out of process plug-ins? Better JavaScript performance? Support for additional emerging and evolving standards? Better OS integration on Windows, Mac and Linux? Hardware-accelerated page rendering? WebGL support? And much more.

Re:No (4, Informative)

electrosoccertux (874415) | more than 4 years ago | (#30893338)

The Linux 2.6 model sucks. 2.6, 2.8, 2.10, etc. became 2.6.1, 2.6.2, 2.6.3... on short support cycles.

You, sir, do not seem to know the nightmare that maintaining separate kernels, and porting features and bugfixes back and forth, created.

Re:No (2, Insightful)

Anonymous Coward | more than 4 years ago | (#30893568)

Why does the Linux community have so much trouble with it, while nobody else does?

FreeBSD, for instance, manages to have several major-number releases in use at any given time. FreeBSD 9.x is in development. FreeBSD 8.x is the recommended production release. But even FreeBSD 7.x is still supported. Not only that, FreeBSD manages to get out several point releases each year, in addition to a major release. But it has none of the problems you mention.

Maybe it's a maturity thing. The FreeBSD development community is made of very talented and very experienced developers who know that you shouldn't just throw patches and features around willy-nilly. The FreeBSD user community is also more mature, willing to wait a short while for a feature to become available through the natural release cycle.

Re:No (0, Flamebait)

Anonymous Coward | more than 4 years ago | (#30893674)

Please feel free to maintain and distribute your own brand of well-supported old kernels. Oh. You don't see a good economic reason to do this yourself? Neither does anyone else.

Re:No (0)

Anonymous Coward | more than 4 years ago | (#30897204)

FreeBSD 7 came out in 2008, right? I really don't see how that is special... I built a media server on Ubuntu in 2006 because of their support promise: the support will only end in mid-2011.

Re:No (1)

snadrus (930168) | more than 4 years ago | (#30898004)

Less an expert, more a pattern-finder response:

In the world of Free (Gratis + Libre) open source software (FLOSS?) there's little need to waste time patching an older system when everyone has free access to a newer system that's backward compatible.
That job is left to distribution owners (like Ubuntu whose October 2008 "LTS" is still patched by them).

This process optimization allows faster developer progress (including testing), which means more frequent improvements.

Re:No (1)

turbidostato (878842) | more than 4 years ago | (#30898516)

"In the world of Free (Gratis + Libre) open source software (FLOSS?) there's little need to waste time patching an older system when everyone has free access to a newer system that's backward compatible."

It's just that, too many times, it tends to be *not* so backward compatible.

Re:No (4, Interesting)

morgan_greywolf (835522) | more than 4 years ago | (#30893572)

You're both right. New features getting added to the stable kernels have done much to reduce stability between kernel versions. So much so that distros have had to pick up the slack by introducing an increasing number of patches. Have you ever looked at the patchset list for Ubuntu? There have been like 17 different kernel patchlevels for Karmic Koala since it was released in October. That's more than one patchset a week, and each patchset can have anywhere from 1-10 patches.

Re:No (3, Informative)

Anonymous Coward | more than 4 years ago | (#30894096)

Patch levels don't start at 1 the day of release, they start the day they start working on the next branch. The kernel included in the installation CD was at patchset 14, the latest released one is 17 (However there were 2-3 updates that didn't change the patch level). And Lucid is already at -11 (see http://changelogs.ubuntu.com/changelogs/pool/main/l/linux-meta/linux-meta_2.6.32.11.11/changelog and http://packages.ubuntu.com/lucid/linux-image).

Re:No (-1, Redundant)

Kjella (173770) | more than 4 years ago | (#30895738)

Mod parent up:

Patch levels don't start at 1 the day of release, they start the day they start working on the next branch. The kernel included in the installation CD was at patchset 14, the latest released one is 17 (However there were 2-3 updates that didn't change the patch level). And Lucid is already at -11 (see http://changelogs.ubuntu.com/changelogs/pool/main/l/linux-meta/linux-meta_2.6.32.11.11/changelog [ubuntu.com] and http://packages.ubuntu.com/lucid/linux-image [ubuntu.com] ).

Re:No (1)

morgan_greywolf (835522) | more than 4 years ago | (#30898172)

You're right, they don't. I'm being over-general here, of course. But patch levels do start at one during development of the distro, and that they went through 14 patchlevels before release day is still very telling.

Re:No (3, Informative)

Cyberax (705495) | more than 4 years ago | (#30893352)

The 2.4/2.5 model sucks, because we have to wait years before features propagate to the stable mainline kernel, or have to resort to backporting and vendor branches.

Re:No (5, Insightful)

diegocg (1680514) | more than 4 years ago | (#30893796)

It's not the waiting that sucks. What sucks is that the old development model was more unstable. For big projects like Linux with a lot of activity, long development cycles just don't work. You don't have releases, so users don't test it. Once you get the first stable release out, users notice that it's very buggy (but you still don't know all the bugs, because most users and distros are still not using it because it has too many bugs), and it takes a full year to get the codebase into a decent shape. That's what happened with Linux 2.6. They had been dropping thousands of LoC into it for a couple of years. Because it was an "unstable cycle", quality was not so important, the main tree was used as a repository for "work in progress" code, and even if quality was important (which it isn't, even if there's a corporate policy that says it must be), you can't measure the quality of the code, because the users are not using it.

The new model, on the other hand, allows new features in every release, but it's much easier to track regressions compared to the previous model. The new features are required to have some quality, they can't have serious bugs, maintainers must agree that they can be merged, and they can only be merged in the first two weeks of the 3-month development period. It allows faster progress, and at the same time bugginess is controlled more easily. Previously, you had a huge diff of several MB, users reporting that the huge diff was causing several bugs and regressions in their systems, and developers had to start debugging the alpha code they had written, and had not tested, two years ago. IMO, in the long term, it's much better for everybody. It's not surprising that FreeBSD and Solaris are using this model too; it makes sense for Mozilla to use it as well.

Re:No (1)

gmack (197796) | more than 4 years ago | (#30897688)

It was worse than that. The lag prompted several maintainers and distros to backport changes, and the result was *two* unstable branches. I recall trying to get a new server online only to discover that the old kernel would crash on boot and the new kernel would crash randomly afterwards.

With the new development model it has been much easier for me to keep stable systems.

Re:No (2, Informative)

Luke has no name (1423139) | more than 4 years ago | (#30893718)

I concur. The "Major.Minor.Bugfix" version scheme is much more informative than Linux's arbitrary "2.6.iteration" format. The 2.6 part doesn't even matter anymore.

Major number changes with breaks in backward compatibility, changes in the direction of development, major new features/architecture, etc.

Minor number changes within a Major number when new features are added but compatibility within the same Major version is not affected. Features are not taken away (i.e., no regressions).

Bugfix number changes within a Minor number when no new features are added; code has simply changed or bugs have been fixed.
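
For illustration, a minimal sketch in Python of how such a "Major.Minor.Bugfix" scheme orders releases; the version strings below are made-up examples, not real releases:

    def parse_version(version: str) -> tuple:
        """Split a 'Major.Minor.Bugfix' string into a tuple of ints."""
        major, minor, bugfix = (int(part) for part in version.split("."))
        return (major, minor, bugfix)

    # Tuples compare element-wise, so sorting gives the intended release order.
    releases = ["3.0.12", "2.6.1", "3.1.0", "3.0.2"]
    print(sorted(releases, key=parse_version))
    # -> ['2.6.1', '3.0.2', '3.0.12', '3.1.0']

    # Same Major (3.0.x -> 3.1.x): features added, compatibility kept.
    # Major bump (2.x -> 3.x): compatibility may break.
    assert parse_version("3.1.0") > parse_version("3.0.12")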

Re:No (1)

mwvdlee (775178) | more than 4 years ago | (#30893920)

Or how about switching to the system the commercial world uses?

Use internal version numbers as suggested by you (or something similar, doesn't really matter) and let the marketing folk handle public version numbers.

Sad but true: typical end-users look at Firefox 3 and compare it to MSIE 8; MSIE is 5 more than Firefox, so therefore it MUST be better.

I think Windows' internal version is still at 5 or 6 or so, whereas the public number is now 7 (after the somewhat non-numeric Vista and XP and about 2001 less than 2008).

Internal version numbers are of no concern to typical end-users; they simply don't care about minor numbers. Just take Ubuntu's system for all I care; at least it's easy for the end-user.

Re:No (1)

nstlgc (945418) | more than 4 years ago | (#30895624)

Windows' internal version is still at 6 mostly for application compatibility reasons, nothing else.

Re:No (1)

mwvdlee (775178) | more than 4 years ago | (#30895992)

So apparently "6" is the correct major version as defined by "only changes when backwards compatibility breaks".

Re:No (1)

Luke has no name (1423139) | more than 4 years ago | (#30896754)

So apparently "6" [in the Linux kernel] is the correct major version as defined by "only changes when backwards compatibility breaks".

Touché, good sir.

Re:No (0)

Anonymous Coward | more than 4 years ago | (#30893784)

The support cycles aren't short. The Linux kernel dev team simply focuses on getting a stable version out and then offloads the kernel's maintenance to whoever wishes to use it, whether it's distro maintainers or your average kernel-compiling nerd. And besides, who forces you to upgrade each time a point release is committed?

Chaotic releases? (3, Funny)

Vornzog (409419) | more than 4 years ago | (#30892998)

Is this chaotic release schedule supposed to be more attractive?

Re:Chaotic releases? (4, Interesting)

Nerdfest (867930) | more than 4 years ago | (#30893106)

Releasing when a feature is ready sounds both chaotic and reasonable. Chaotic is not necessarily bad.

Re:Chaotic releases? (4, Funny)

TheLink (130905) | more than 4 years ago | (#30893298)

Was that a butterfly wooshing by? ;)

Re:Chaotic releases? (1)

idontgno (624372) | more than 4 years ago | (#30893408)

I'm sure it was. It was looking for a flower named "Lorenz" [wikipedia.org] , not "Lorentz".

TBH, I have no idea where Moz got the name. The only Wikipedia hit I got was for the Lorentz Transform [wikipedia.org] , which is the equivalence and mutual convertibility of different relativistic frames of reference. Is this Moz's way of saying "we'll all be going at different relativistic speeds, accelerations, and frames of reference"?

O_o

Re:Chaotic releases? (1)

Vornzog (409419) | more than 4 years ago | (#30893922)

A perfectly good pun, ruined by a voiceless consonant in an uncommon English digraph.

I'm sure you wouldn't have noticed that extra letter if you were moving at 2/3 of the speed of light.

Hooked on phontonetics worked for me.

Re:Chaotic releases? (1)

Vornzog (409419) | more than 4 years ago | (#30893936)

Apparently I really can't spell.

Photonetics.

Grr.

Re:Chaotic releases? (1)

maxwell demon (590494) | more than 4 years ago | (#30895244)

Well, they hope that all their bloat is reduced through Lorentz contraction if they just develop fast enough.

Re:Chaotic releases? (1)

anaesthetica (596507) | more than 4 years ago | (#30897986)

All of their recent codenames have been parks: Shiretoko [wikipedia.org] , Namoroka [wikipedia.org] , and now Lorentz [wikipedia.org] .

Re:Chaotic releases? (1)

Nerdfest (867930) | more than 4 years ago | (#30893442)

I think it was actually the whole thunderstorm.

Re:Chaotic releases? (2, Funny)

morgan_greywolf (835522) | more than 4 years ago | (#30893668)

Releasing when a feature is ready sounds both chaotic and reasonable. Chaotic is not necessarily bad.

Three words: Duke Nukem Forever.

*ducking*

the universe is predjudiced! (2, Funny)

Thud457 (234763) | more than 4 years ago | (#30896174)

Obviously, DNF, if completed, would have had some sort of feature that generated Higgs bosons.

Re:Chaotic releases? (4, Funny)

oGMo (379) | more than 4 years ago | (#30893832)

So what you're saying is that this model is chaotic good instead of chaotic evil ...

Re:Chaotic releases? (1)

Pulse_Instance (698417) | more than 4 years ago | (#30894076)

That will work fine until a butterfly happens to set the evil bit on the hard drive, flipping the chaos from good to evil. http://xkcd.com/378/ [xkcd.com]

Re:Chaotic releases? (1)

Tumbleweed (3706) | more than 4 years ago | (#30894048)

Chaotic is not necessarily bad.

Remember you said that when the dinosaurs are chasing you.

Re:Chaotic releases? (1)

Thoughts from Englan (1212556) | more than 4 years ago | (#30894236)

The technique certainly sounds strangely attractive.

Re:Chaotic releases? (1)

turbidostato (878842) | more than 4 years ago | (#30898578)

"The technique certainly sounds strangely attractive."

Where's the "+1: Nerdly funny" when you need it!?

Re:Chaotic releases? (3, Funny)

Eudial (590661) | more than 4 years ago | (#30893470)

Again, I request a "+1 Badum-tish"

Pointless, unles... (1, Funny)

Anonymous Coward | more than 4 years ago | (#30893014)

Can I use the theory of special relativity to get out of missed deadlines? Sure, we are way behind in this frame of reference. But as viewed from a different frame of reference traveling near the speed of light relative to us, we shipped yesterday!

Buzz (1)

oldhack (1037484) | more than 4 years ago | (#30893018)

God forbid that a name should suggest something of substance.

Right. So they're going to fuse six sigma (1)

wiredog (43288) | more than 4 years ago | (#30893026)

with lean methods?

Where have I heard this before? [dilbert.com]

Why don't we focus on learning (0, Troll)

Zero__Kelvin (151819) | more than 4 years ago | (#30894446)

to post properly on Slashdot before we try to be insightful and funny in a single line?

The branch is Lorentz, not the development model (5, Informative)

Anonymous Coward | more than 4 years ago | (#30893134)

All of the Firefox branches are named after national parks... the name has nothing to do with the development model.

http://en.wikipedia.org/wiki/Lorentz_National_Park

Re:The branch is Lorentz, not the development model (1)

daveime (1253762) | more than 4 years ago | (#30893230)

And there was me thinking it was named after EVE T1 salvageable materials ... looking forward to the "Burned Logic Circuit" version ... oh, wait, maybe we've already had that one, it makes your RAM fry from overuse.

Re:The branch is Lorentz, not the development model (1)

Monkeedude1212 (1560403) | more than 4 years ago | (#30893332)

Mozilla Power Conduit. It's so bloated you'll need a new power supply!

Re:The branch is Lorentz, not the development model (1)

NeoSkandranon (515696) | more than 4 years ago | (#30893424)

Mozilla Alloyed Tritanium Bar oughta be great then

And they are moving their servers to Amman (2, Funny)

BancBoy (578080) | more than 4 years ago | (#30893152)

Management has dubbed the new scheme - Lorentz of Arabia!

Thank you, thank you, I'll be here all week! Try the lamb!

Scheduling (5, Funny)

jpmorgan (517966) | more than 4 years ago | (#30893160)

Plus, with the "Lorentz" transformation, time dilation makes it a lot easier to hit release dates. But there has been some concern over the developers' sudden weight gain.

Re:Scheduling (0)

Anonymous Coward | more than 4 years ago | (#30893234)

... You know that people do not gain more weight as a side effect of time dilation?

Although their energy does seem to increase which should in turn make them more productive!

E = mc^2 only for velocity ~= 0. (2, Informative)

FooAtWFU (699187) | more than 4 years ago | (#30893626)

The "weight gain" is due to an abuse of the equations:

The equation Einstein came up with more than a century ago can be considered a degenerate form of the mass-energy-momentum relation for vanishing momentum. Einstein was very well aware of this, and in later papers repetitively stressed that his mass-energy equation is strictly limited to observers co-moving with the object under study. However, very, very few people seem to have paid attention to Einstein's warnings, nor to any of the more recent warnings. Even worse, the vast majority of authors of popular science books take great liberty in applying E=mc^2 to objects moving at speeds close to the speed of light, and then declare mass to increase with velocity in an attempt to recover consistency in what has become an incoherent mix of relativistic and Newtonian dynamics. Theoretical physicist Lev Okun refers to this practice as a "pedagogical virus". ..... What I consider truly amazing, is how few people are aware of the mass-energy-momentum relation.

-- What's Wrong with E=mc^2 [scientificblogging.com] , The Hammock Physicist.

Our blogger then proceeds to draw a right triangle with sides E*v, E*c, and m*c^3. For velocities (v) of 0, E*c=m*c^3, or E=mc^2. Yay vectors.
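
For reference, the textbook energy-momentum relation behind that triangle (standard special relativity, not taken from the linked post); multiplying through by c^2 and substituting p = Ev/c^2 gives exactly the blogger's right triangle:

    \[
      E^2 = (pc)^2 + (mc^2)^2
      \quad\Longrightarrow\quad
      (Ec)^2 = (Ev)^2 + (mc^3)^2 ,
    \]
    % for a co-moving observer v = 0, hence p = 0, and this collapses to E = mc^2.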

Re:E = mc^2 only for velocity ~= 0. (1)

jpmorgan (517966) | more than 4 years ago | (#30894686)

That depends entirely on how you define mass. Invariant mass doesn't change. Relativistic mass (i.e., an object's resistance to deflection in spacetime) does.

But at the macroscopic level invariant mass is a convenient fiction, unless you're dealing with something at absolute zero. If not, then guess what: the invariant mass includes the object's heat, expressed in kinetic energy.

Re:E = mc^2 only for velocity ~= 0. (1)

turbidostato (878842) | more than 4 years ago | (#30898616)

"Invariant mass doesn't change."

Uhhh... that *might* explain why they chose such a flabbergasting adjective... "invariant"!

Other way around (1)

Roger W Moore (538166) | more than 4 years ago | (#30895158)

I think you'll find that it is the other way around: release dates will get a lot harder to hit because less time appears to pass for the fast-moving developers compared to the rest of the planet. Also, mass (not weight!) is an invariant quantity, so there will be no change. Yes, I know that a lot of people think that the mass increases, but it does not: the 'gamma' factor in momentum comes from the velocity, NOT from the mass, which is why things like "F = gamma ma" do not work.
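
As a quick check of that point, here are the standard relativistic definitions (textbook relations, not something from the comment): the gamma factor multiplies the velocity inside the momentum, the mass m stays invariant, and differentiating p(t) shows why "F = gamma ma" holds only in a special case:

    \[
      \vec{p} = \gamma m \vec{v}, \qquad
      \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
      \vec{F} = \frac{d\vec{p}}{dt}
              = \gamma m \vec{a}
              + \gamma^{3} m \, \frac{\vec{v}\cdot\vec{a}}{c^{2}} \, \vec{v} .
    \]
    % F = gamma^3 m a for acceleration parallel to v, and F = gamma m a only
    % for acceleration perpendicular to v; m itself never changes.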

Lorentz attractors? (0)

Anonymous Coward | more than 4 years ago | (#30895316)

Yah... but with Lorentz attractors I would expect it to run in circles and go nowhere...

Development cycles (5, Insightful)

girlintraining (1395911) | more than 4 years ago | (#30893208)

the new methodology aims to deliver new features much more quickly while still maintaining backwards compatibility, security, and overall quality.

A style of management is only as good as its manager(s). We've had many, many methods of improving all three of those, but as an industry we routinely and repeatedly turn them down for most applications over cost considerations. A new hybrid model of development won't change this -- continual pressure from inside the organization will eventually subvert any gains at the process level. Senior-level management has to push this from the start -- only then would this or any other kind of methodology have a chance at achieving its goals.

Waterscrum (5, Interesting)

threemile (215603) | more than 4 years ago | (#30893214)

At Yahoo! we tried this on a few projects and ended up calling it waterscrum. Wanting the dev flexibility of agile and the (perceived) business certainty of waterfall at the same time isn't really possible when it's not understood that the dev methodology has impacts outside of the tech organization. If you're doing agile dev, the marketing materials, sales collateral, etc. are much more difficult to write and lock down when you're looking to make a splash in the market. For agile to work the entire company needs to be okay with some level of uncertainty, or at least understand that for major market releases you still need to plan a date far in advance. Just because you're launching code doesn't mean you're launching a product, and getting materials locked down is harder to do when, by definition, changes happen more frequently.

Re:Waterscrum (2, Interesting)

weicco (645927) | more than 4 years ago | (#30894178)

Dang! I thought I had the perfect idea for how to mix the waterfall model with agile development. I started writing an article about it some months ago but can't get myself to finish it.

The idea was basically that when you start a project you must know at least something about what problem the project tries to solve, and that's your goal. When the goal is at least somewhat clear, you write a requirements analysis and an architectural specification. You can always come back to the arch spec, but you have to understand that making dramatic changes means costs go up, as does development time.

The next thing is to define interfaces. If your application has many different modules, you need to define how those modules interact with each other. This helps in the next step if there are going to be changes, especially inside the modules.

After this we start the agile "steps". You define one step, or iteration, and write a functional spec which sets the goal for that particular step. You can change the func spec whenever there's a need. Changes in the func spec don't necessarily raise costs and development time much, at least not as much as changing the arch spec, because the changes (hopefully) touch only this one step.

Then I figured out that TDD and CI would be perfect models for this kind of development. With TDD and CI you at least have automatic regression tests which can (and will) be run every time something changes. When one step is completed and fully tested, you go to the next step, and so on.
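
As a minimal sketch of what such an automatic regression test looks like in practice (plain Python unittest; apply_discount is a made-up example function, not anything from this comment):

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Return price reduced by percent, never going below zero."""
        return max(price * (1 - percent / 100.0), 0.0)

    class ApplyDiscountRegression(unittest.TestCase):
        def test_ten_percent_discount(self):
            # Written before the feature, per TDD; it stays in the suite so
            # later iterations cannot silently break it.
            self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

        def test_never_negative(self):
            # Guards a bug fixed in an earlier iteration.
            self.assertEqual(apply_discount(5.0, 200), 0.0)

    if __name__ == "__main__":
        unittest.main()

A CI job would simply run something like "python -m unittest" on every change, so regressions surface as soon as a later step breaks an earlier one.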

When all the steps are done, you check that the program meets every requirement and proceed to a full system test in a duplicated production environment. If that goes OK, then it's time to roll it out to production and start sending bills.

But if you have already tested this and found out it doesn't work, I think I'll save myself some time and send my half-baked article to /dev/null :(

Re:Waterscrum (1)

threemile (215603) | more than 4 years ago | (#30896788)

I wouldn't throw your article out - we certainly didn't disprove it as a methodology, but there are some business impacts that need to be planned for ahead of time.

Re:Waterscrum (0)

Anonymous Coward | more than 4 years ago | (#30897870)

That's Unified Process or one of its variants (RUP, ...). Each "discipline" (requirements analysis, architecture, design, code, test, deployment, ...) is repeated at each iteration. Depending on the "phase" of the project, each activity gets done in a different proportion (more requirements at the beginning, iterative refinement of requirements later, ...).

three digit version numbers (0)

Anonymous Coward | more than 4 years ago | (#30893278)

Why not simply use the classic versioning?

It's major.minor.patchlevel: the first number is for incompatible changes, the second is for features added, and the third is for bugfixes.

Perhaps that is just too reasonable, and you cannot expect people to use reason on something publicly visible. (What's so bad about bumping the major version again soon if something had to be changed incompatibly again? And why not keep the major version if things are only added?)

Re:three digit version numbers (1)

EvanED (569694) | more than 4 years ago | (#30893484)

(What's so bad about bumping the major version again soon if something had to be changed incompatibly again? And why not keep the major version if things are only added?)

I would say this: it's because users don't care about incompatible changes (at least to their web browser), and version numbers are as much for the user -- if not more -- as for the devs.

What is going to become incompatible if Firefox upgrades? Basically, plugins. And most of those in popular use will be updated in short order (often before release), and Firefox makes updating those painless.

By contrast, what users expect (reasonably expect, I'll add) is that if a significant feature update is done, or a significant UI update is made, or a large number of bugs are fixed, then the major version number will be updated.

Re:three digit version numbers (1)

Knuckles (8964) | more than 4 years ago | (#30894260)

It's not about the version numbers but the development model. Basically, "do we add some features in the stable branch and make frequent releases, or do all new features go only into a dev branch that gets stabilized and released only rarely?"

Oh god, they still use Waterfall? (5, Insightful)

Hurricane78 (562437) | more than 4 years ago | (#30893466)

The waterfall model is horrible for big projects. I thought everybody knew that and had switched to the spiral model a loong time ago.
And now they add the one thing to it that is even more horrible? Agile?? Or in other words: Spaghetti coding with the motto: “If perfect planning is impossible, maybe not planning at all will work.”
No, dammit! It’s just as bad.
Maybe that’s why they try to mix them both... To get to the actually healthy middle ground.

But still, it’s silly. We have a perfectly good spiral model. Hell, the whole game industry uses it. (As far as I know.) And it works great, even on those huge 5-year projects. (Notable exception that proves the rule: Duke Nukem Forever.)

Sorry, but that will result in a huge epic failure, and probably Firefox’s death.
Mark my words. :/

Re:Oh god, they still use Waterfall? (0)

Anonymous Coward | more than 4 years ago | (#30893588)

Hell, the whole game industry uses it.

And look at all the buggy game releases... That's not the right path to follow.

Re:Oh god, they still use Waterfall? (0)

Anonymous Coward | more than 4 years ago | (#30893762)

I thought everybody knew that and had switched to the spiral model a loong time ago.

"Everybody" switched to spiral? Not likely. Spiral is for big projects. Agile makes the most sense on smaller projects (and no, agile does not mean "spaghetti coding"). And still there are plenty of projects where waterfall just works. Why would those project teams change?

Re:Oh god, they still use Waterfall? (1)

Magic5Ball (188725) | more than 4 years ago | (#30895182)

Key word: Project.

Projects have defined, achievable end states, and should have built-in mechanisms for winding down. Firefox achieved the objectives of its project, to release a lightweight standards-compatible browser, somewhere around version 2, at which point it should have wound down into a maintenance mode if it were strictly a project.

What we have here is an activity organized by a party which has interests beyond producing a good browser as a project. At the very least, the Mozilla Foundation's and the Mozilla Corporation's desire for their own continued existence and profitability needs to be considered in all discussions about the Firefox leaders' choices.

Re:Oh god, they still use Waterfall? (1)

Chris Mattern (191822) | more than 4 years ago | (#30893780)

I thought everybody knew that and had switched to the spiral model a loong time ago.

Hence the popularity of the term, "Project Death Spiral".

Like Spiral Model is Good? (3, Interesting)

tjstork (137384) | more than 4 years ago | (#30894072)

The waterfall model is horrible for big projects. I thought everybody knew that and had switched to the spiral model a loong time ago.

The spiral model is utterly terrible. Since the DoD moved over to it, every one of their projects is over budget, underperforming, and late.

Agile isn't all that much better. The whole point of Agile is that you can have all of these changes... but you can get that with shorter release cycles, and it's pretty easy to game Agile as much as any other model.

I think waterfall is probably still the best.

Re:Like Spiral Model is Good? (4, Interesting)

Kjella (173770) | more than 4 years ago | (#30896580)

Based on my experience so far I would say to do the technical structure with waterfall, and the functional structure with agile. What do I mean by that? Well, most of the time the customer doesn't really know what he wants, which is why blueprinting fails so miserably. But you can often at a technical level know what a customer wants. Let's for example say the customer wants drop down fields in an application. You know you'll need a storage backend (database?), UI front end (web app?), you need functions to manage the values, you need listing, sorting, filtering (single or multivalue?), security, audit logs and so on. You can design a ton of things by waterfall without actually knowing what drop downs the customer will want.

Agile promises to do that by refactoring, which rarely happens because it's very likely to break things that were already working, despite the unit tests. They need the documentation from the original waterfall design, and they need the testing from the new waterfall design to ensure quality. One of the things I've noticed suffers most in agile is the documentation, because there's an implicit belief that this will all change again, so people skimp even more than usual, planning to document it only when it's "final". The result is often kludges made to extend things rather than actually going back to refactor, because people spent very little time thinking about a long-term design in the first place.

Conversely, I have done quite a few implementation projects, and in most the customer has only a list of specifications and no real idea how he'd like it to work. Creating a blueprint accurate enough that technical people could implement from it, and that the customer understands well enough that he's not going to say "well, that's not what I wanted", is like pulling teeth. And at the end of the day, different stakeholders will still have a different idea in their minds of what it's going to be. If you have a decent architecture, then you can do agile on top of that. Want this link to go there? Want to see these things? Can we get a checkbox there? Can you calculate that in a preview? Hopefully yes, but if it goes against the architecture it might need to go through a longer waterfall process.

There's a balance here: on one side you have expert systems that try to be ultraflexible in every direction but only end up as an overcomplicated mess. On the other, you have the projects where nobody took five minutes to think "Am I trying to solve one special instance of a general issue here?". I've no idea if it'd only make a complete mess of two development methodologies, but I'd sure like to try it out sometime.

Re:Oh god, they still use Waterfall? (1)

PmanAce (1679902) | more than 4 years ago | (#30894310)

"Hell, the whole game industry uses it. (As far as I know.)" Well I can confirm that my work uses agile, and we have been voted in the top 5 best studios in the world according to Game Informer.

Re:Oh god, they still use Waterfall? (2, Insightful)

BitZtream (692029) | more than 4 years ago | (#30894862)

First sign a developer is shitty ...

He/She starts talking about 'which development model is better' and starts naming them.

It's the developer with the issue in your case, not the model.

There is no real Waterfall Model (1)

ClosedSource (238333) | more than 4 years ago | (#30894960)

"The waterfall model is horrible for big projects."

Given that the waterfall model was merely a straw-man, it's best not to use it for anything.

Re:There is no real Waterfall Model (0)

Anonymous Coward | more than 4 years ago | (#30895290)

"The waterfall model is horrible for big projects."

Given that the waterfall model was merely a straw-man, it's best not to use it for anything.

Unfortunately, people have been using it for a long time. Although, oddly enough, many have even used it successfully — or, at least, used a modified model that incorporates some of the so-called "agile" techniques, such as iteration, that have actually been around much longer than the "agile" label (or even the "waterfall" straw-man).

Re:Oh god, they still use Waterfall? (2, Funny)

Angst Badger (8636) | more than 4 years ago | (#30895230)

Oh, pfft. The thing that killed Duke Nukem Forever was the decision to implement it in Perl 6.

Re:Oh god, they still use Waterfall? (2, Insightful)

Doomdark (136619) | more than 4 years ago | (#30895586)

And now they add the one thing to it that is even more horrible? Agile?? Or in other words: Spaghetti coding with the motto: “If perfect planning is impossible, maybe not planning at all will work.”

This is not meant as a flame, but I don't think you have a clue as to what Agile means here. Possibly because the term has been abused a lot by people who just want to get rid of all processes -- nonetheless, agile does not mean "no process". Just a light-weight, common-sense process that most mature developers would follow anyway.

It is also true that agile methodology is a meta thing (an "abstract methodology"). So it is a bit silly to argue about it, as opposed to a concrete implementation thereof, like Scrum. But I assume you were referring to the class of methodologies, all of which allegedly would be just excuses for not thinking anything through. And that is a false statement.

Re:Oh god, they still use Waterfall? (0)

Anonymous Coward | more than 4 years ago | (#30898716)

"But I assume you were referring to class of methodologies, all of which allegedly would be just excuses of not thinking through anything. And that is a false statement."

Yessir. By your own reckoning it resembles communism just a bit too much: the perfect political system, if only we managed to implement it properly. Sadly we can't, and it doesn't help when people say "ah! but that wasn't *real* communism!"

Re:Oh god, they still use Waterfall? (2, Informative)

Alef (605149) | more than 4 years ago | (#30897556)

And now they add the one thing to it that is even more horrible? Agile?? Or in other words: Spaghetti coding with the motto: “If perfect planning is impossible, maybe not planning at all will work.”

It is obvious that you have never worked with a properly implemented agile process.

First of all, spaghetti code is absolutely not accepted. High quality code is imperative to maintain a successful product in the long run, and something methodologies such as Scrum explicitly declare as non-negotiable. In fact, one of the main points of Scrum is to try to eliminate stakeholders' influence over the quality--time trade-off.

And secondly: of course you do planning when you work with Agile! It's just that you don't stipulate what will be achieved by a certain deadline -- instead you estimate. And this is the only sensible thing to do. You cannot be more efficient than 100%, no matter how much you need to be. If things take longer, then they were harder than you thought (and hence you try to make a better estimate the next time). You can reduce the scope of the task, or you can put in more hours for a temporary boost, but the map has to change if it differs from reality.

If by "planning" you mean writing specifications, then no, you don't write as many specifications. But that doesn't mean that you do not write any specifications at all. Again, common sense dictates the rule. Specify what you need to, but don't try to specify things just for the sake of it. That is pointless at best and usually detrimental.

Re:Oh god, they still use Waterfall? (1)

turbidostato (878842) | more than 4 years ago | (#30898850)

"spaghetti code is absolutely not accepted"

Yeah, of course not. And then, failing at a sprint's goals is absolutely not accepted either (at least not by management, which is all that matters). Do you know what happens when those two "absolutes" collide?

"High quality code is imperative to maintain a successful product in the long run"

And doing what needs to be done to reach the short-run goals is imperative to get this week's paycheck. Again, do you know what happens when those two "imperatives" collide?

"methodologies such as Scrum explicitly declare as non-negotiable."

Of course they do. The problem is that "quality" is not hard-definable while features are. "I want these bells by Friday" is more easily checkable than "but without degrading code quality".

"you don't stipulate what will be achieved by a certain deadline -- instead you estimate."

Of course you estimate. It's your manager who stipulates.

"You can reduce the scope of the task, or you can put in more hours for a temporary boost"

Or you can just cut corners and cross your fingers.

All in all I don't think you are wrong: of course Agile can bring valuable ideas to the table, but the point is that the key success factor is, and always has been, not tools or methodologies but people; the higher up they are, the more critical they are to the project.

Take a good management team and you'll succeed with Scrum, waterfall or whatever. Take your typical Dilbert-like management and you are doomed no matter what.

"Waterfall" = "Toss over the fence"? (0)

Anonymous Coward | more than 4 years ago | (#30894086)

Many years ago, I worked at a financial software company as a product tester. They were first class with development. Development would release builds every two weeks while we tested them and put in fixes for the next build. After a few months the product was released with few if any bugs.

Then the company got bought by another company and because of the similar products, a bunch of us got booted out. Within a few weeks I started at a different company testing hospital nurse-call systems and person/asset tracking devices (IR badges that send out serial numbers to sensors so you can locate equipment or a person in a building). Development in this place was less than stellar. They used the "Toss it over the Fence" method of software development. The developers would do their coding and once they were happy with it would release it to Product Assurance and then move on to a different product to work on. Any problems found in testing would not get fixed because they were no longer working on that product. As a result, Product Managers would downgrade all found bugs as "minor" -- even critical show-stoppers. So basically the function of testing was to rubber-stamp the software before shipping it out.

So hopefully, "Waterfall" development doesn't mean "Toss it over the Fence".

We use the "Homer" Methodology... (2, Funny)

bodland (522967) | more than 4 years ago | (#30894402)

All deployments end with "Doh!" and are fixed and redeployed.

UCD (1)

Rui Lopes (599077) | more than 4 years ago | (#30894924)

I'm sorry, but *every* UI-centric application development model should follow some flavour of User Centred Design [wikipedia.org].

My favorite model (1)

caywen (942955) | more than 4 years ago | (#30896590)

I much prefer the "someone just please code the damn thing" model.

Depends on the project (1)

caywen (942955) | more than 4 years ago | (#30896730)

I think a typical yet reasonable school of thought is that the best model depends on the characteristics of the project. Some projects are very fluid and some projects are very constrained. Designing the next cool iPhone game versus programming a perfect clone of last month's cool iPhone game.
