How Maintainable Is the Firefox Codebase?

An anonymous reader writes "A report released this morning looks at the maintainability of the Firefox codebase through five measures of architectural complexity. It finds that 11% of files in Firefox are highly interconnected, a value that went up significantly following version 3.0, and that making a change to a randomly selected file can, on average, directly impact eight files and indirectly impact over 1,400 files. All the data is made available, and the report comes with an interactive Web-based exploration tool." The complexity exploration tool is pretty neat.
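A note on how such figures are derived: the direct and indirect impact numbers fall out of a file-level dependency graph. Below is a minimal sketch in Python with a toy, hypothetical graph (the report builds its graph from the real Firefox dependency data, which is not reproduced here):

```python
from collections import deque

# Hypothetical file-level dependency graph: an edge A -> B means
# "B depends on A", so a change to A can impact B.
deps = {
    "nsCOMPtr.h": ["nsDocShell.cpp", "nsGlobalWindow.cpp"],
    "nsDocShell.cpp": ["nsWebBrowser.cpp"],
    "nsGlobalWindow.cpp": [],
    "nsWebBrowser.cpp": [],
}

def direct_impact(graph, node):
    """Files that depend on `node` directly (first-order fan-out)."""
    return set(graph.get(node, []))

def indirect_impact(graph, node):
    """Files reachable from `node` through any chain of dependencies."""
    seen, queue = set(), deque(graph.get(node, []))
    while queue:
        current = queue.popleft()
        if current not in seen:
            seen.add(current)
            queue.extend(graph.get(current, []))
    return seen

# Averaging these counts over every file gives figures comparable to the
# report's "eight direct / 1,400 indirect" numbers.
for f in deps:
    print(f, len(direct_impact(deps, f)), len(indirect_impact(deps, f)))
```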
  • by xxxJonBoyxxx ( 565205 ) on Wednesday May 15, 2013 @12:52PM (#43732995)

    >> A number of modules, namely, accessible, browser and security, frequently appear among the most complex modules. Further investigation may be helpful in identifying why that is the case.

    Does this guy know what Firefox is?

    • Maybe he's referring to the Mozilla codebase as a whole rather than just Firefox itself?
    • Re: (Score:2, Funny)

      by Anonymous Coward

      A virtual machine running an operating system that by accident happens to have a (rather mediocre) browser? Just like Chrome?

      All that was missing was the relabeling. I guess that's done with "FirefoxOS" and "ChromeOS". ;)

      Now all we need is to port Linux to it. ... Oh wait! [jslinux.org]

    • What is a Firefox? A miserable little pile of sources.

      But enough code... Have at you!

      • What is a Firefox? A miserable little pile of sources.

        Really? I was under the impression that, traditionally, a Firefox was a miserable little pile of memory leaks. Although these days, that doesn't quite seem to be the case as much as it used to be.

  • So? (Score:5, Insightful)

    by ShanghaiBill ( 739463 ) * on Wednesday May 15, 2013 @12:56PM (#43733033)

    It finds that 11% of files in Firefox are highly interconnected

    Figures like this would be more useful if they were put in context. What is a "normal" level for connectedness? What is the level for the Linux kernel, or for GCC? Compared to other similar sized projects, is 11% good or bad?

    • Re:So? (Score:5, Insightful)

      by hedwards ( 940851 ) on Wednesday May 15, 2013 @12:59PM (#43733075)

      Normal probably isn't so useful here, but it would give some context. 11% of files being highly interconnected could be a sign of incompetence on the part of the developers, or it could be a sign that they're engaged in sound design by splitting off commonly used methods into their own files and treating them as libraries.

      I'd suspect that the latter is the case here.

      • Re:So? (Score:4, Informative)

        by steelfood ( 895457 ) on Wednesday May 15, 2013 @01:28PM (#43733383)

        It's crucial to know the distribution of that 11%. If they were all located in one area, it might be as you say. But if the 11% comprised a few files in each major module, then that'd be bad.

        You want the bulk of your program doing the actual work to look like a tree. You don't want the bulk of your program to look like a mesh (graph). This is especially true of your core components.
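        One rough way to quantify the tree-versus-mesh distinction is the fraction of file pairs connected through the transitive closure of the dependency graph (a simplified cousin of the propagation-cost metric used in studies like this one). A sketch with toy graphs, not Firefox data:

```python
def propagation_cost(edges, nodes):
    """Fraction of ordered (a, b) pairs, a != b, where b is transitively
    reachable from a -- a rough 'how mesh-like is this?' score."""
    reach = {n: set() for n in nodes}

    def dfs(start, node):
        for nxt in edges.get(node, []):
            if nxt not in reach[start]:
                reach[start].add(nxt)
                dfs(start, nxt)

    for n in nodes:
        dfs(n, n)
    pairs = len(nodes) * (len(nodes) - 1)
    return sum(len(r - {n}) for n, r in reach.items()) / pairs

nodes = ["a", "b", "c", "d"]
tree = {"a": ["b", "c"], "b": ["d"]}            # layered, tree-like
mesh = {"a": ["b", "c", "d"], "b": ["a", "c"],  # everything eventually
        "c": ["d", "a"], "d": ["b"]}            # touches everything else

print(propagation_cost(tree, nodes))  # ~0.33: changes stay local
print(propagation_cost(mesh, nodes))  # 1.0:   changes ripple everywhere
```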

    • It finds that 11% of files in Firefox are highly interconnected

      It means 89% aren't...which sounds much nicer.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Not only that. The number is entirely meaningless if we don't know *how* they are interconnected.

      If they are interconnected through a well-defined and stable interface, they can be connected as much as they want... it doesn't matter!

      What counts is the *modularity*! How much can you treat everything as independent modules? How much can you change the *implementation* without causing trouble?

      Because if they are cleanly separated that way, they are no different from being completely separate projects or programs.
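      A toy illustration of the parent's point, in Python rather than the C++ Firefox is written in: however many callers "connect" to a module, the coupling is cheap if they only touch a stable interface. The CookieStore name below is made up for the example.

```python
from abc import ABC, abstractmethod

class CookieStore(ABC):
    """A stable interface. Any number of modules can depend on this;
    that coupling is harmless as long as the interface doesn't churn."""

    @abstractmethod
    def get(self, host: str) -> list[str]: ...

    @abstractmethod
    def set(self, host: str, cookie: str) -> None: ...

class InMemoryCookieStore(CookieStore):
    """One implementation; it could be swapped for an SQLite-backed one
    without touching any caller."""

    def __init__(self) -> None:
        self._jar: dict[str, list[str]] = {}

    def get(self, host: str) -> list[str]:
        return self._jar.get(host, [])

    def set(self, host: str, cookie: str) -> None:
        self._jar.setdefault(host, []).append(cookie)

def load_page(store: CookieStore, host: str) -> list[str]:
    # "Interconnected" with the cookie module, but only via the interface.
    return store.get(host)

store = InMemoryCookieStore()
store.set("example.org", "session=abc")
print(load_page(store, "example.org"))  # ['session=abc']
```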

    • There are lots of measures of a code's "maintainability", with interconnectedness being just one of them.

      More to the point, that's what code tests are for: to make sure changing one thing doesn't break another. Talking about the "health" of the code base without knowing about test coverage or effectiveness is pretty damned meaningless, regardless of "interconnectedness". My view is that Ali Almossawi's paper is therefore a waste of dead trees.
    • by sootman ( 158191 )

      Good, obviously. It goes to 11!

  • Got a trojan warning (Score:5, Informative)

    by dasapfe ( 856009 ) on Wednesday May 15, 2013 @12:58PM (#43733067)
  • by Aighearach ( 97333 ) on Wednesday May 15, 2013 @01:03PM (#43733131)

    and tomorrow we'll count the lines of code and spew more meh

  • Is it a coincidence that 3.0 is when they started versioning up like crazy every two weeks? I think not!

  • by 0x000000 ( 841725 ) on Wednesday May 15, 2013 @01:21PM (#43733307)

    I am wondering how this stacks up against a project like WebKit/Blink, as well as how that project compares to the original KHTML. Sure, it is only a renderer/HTML layout/JavaScript engine and won't contain the browser chrome like Firefox, but I think it would be interesting to look at.

    Many people have also suggested that WebKit is easier to embed into various environments (more so than Gecko) and that it has been able to evolve faster mainly because its code base is cleaner. I wonder whether this holds true from a complexity standpoint, or whether it is more complex but simply laid out better, in a way that is easier to understand.

    • Webkit may be better documented. I tried to find documentation on Gecko, but it was very limited, so I gave up on it.

  • by revealingheart ( 1213834 ) on Wednesday May 15, 2013 @01:38PM (#43733489)

    According to recent comments [mozillazine.org] (continued on the next day's thread), the win32 compiler that Mozilla uses is approaching the 4GB memory limit, after which LibXUL (which Firefox depends upon) will no longer compile.

    It's currently at 3.5GB, and at the current rate, will reach the limit in approximately 6 months: Chart of memory usage of LibXUL during last 90 days [mozilla.org]

    While I think that Servo will produce a more decentralised design than Gecko and XUL, the memory limit will be reached well before that. With Windows XP support ending next year, Mozilla should consider migrating to x64 as soon as reasonably possible, keeping 32-bit builds, but prioritizing the stripping of large and extraneous code over new features.
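    For what it's worth, the "approximately 6 months" figure is a simple linear extrapolation. A back-of-the-envelope version, where the growth rate is an assumption chosen to match the parent's numbers rather than a value read off the linked chart:

```python
# Back-of-the-envelope extrapolation. The 4 GB limit and the 3.5 GB current
# usage come from the parent comment; the monthly growth rate is assumed.
limit_gb = 4.0
current_gb = 3.5
growth_gb_per_month = 0.08  # assumption, roughly matching "6 months"

months_left = (limit_gb - current_gb) / growth_gb_per_month
print(f"~{months_left:.1f} months until LibXUL no longer links")  # ~6.2
```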

    • Sounds like a good point to refactor
    • Reminds me of http://developers.slashdot.org/story/11/12/14/1725205/firefox-too-big-to-link-on-32-bit-windows [slashdot.org]... As one commenter in that thread asked, haven't they switched to x64 compilers yet? (Apparently there are issues getting the x86 version to compile properly on x64.)
      • I think something is seriously wrong when a program becomes too large to be compiled.
        • by Anonymous Coward

          To be linked, not compiled... if you use link-time optimization, the memory usage grows fast.

    • Switching a few command line options to the compiler would completely resolve the issue. The downside is that it'll take longer to compile.

      Realistically though, if you're hitting those limits, the compiler isn't the problem; the code is.

      Bloat much?

      • It has more to do with the compiler optimizations that profile the code than with code bloat.

    • I don't get it. Couldn't you just use a 64-bit compiler and set it to produce 32-bit binaries?
    • by HoserHead ( 599 ) on Wednesday May 15, 2013 @05:22PM (#43735359)

      I'm a Firefox developer.

      This is slightly inaccurate. We aren't running out of memory to link Firefox; we're running out of memory to run Profile-Guided Optimization (PGO) on Firefox.

      PGO looks at what is actually executed during a given workload and optimizes based on that. It can be a pretty big win (30% in some workloads), so we work pretty hard to keep it going.

      Unfortunately, PGO needs to have not only all the code, but all the intermediate representations and other metadata about the code in memory at one time. (That's why we're running out of memory.)

      Unfortunately, MSVC doesn't support producing a 32-bit binary using their 64-bit compiler.

      (FWIW, Chrome has *always* been too big to use PGO.)
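      For readers unfamiliar with PGO, the overall workflow looks roughly like the sketch below. It uses GCC's -fprofile-generate / -fprofile-use flags rather than the MSVC switches Mozilla's Windows builds actually use, and the source file and workload names are placeholders:

```python
import subprocess

SOURCES = ["xul_stub.cpp"]  # placeholder; the real build involves far more

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Build an instrumented binary that records which code paths are hot.
run(["g++", "-O2", "-fprofile-generate", *SOURCES, "-o", "app-instrumented"])

# 2. Run a representative workload; profile data (*.gcda files) is written.
run(["./app-instrumented", "--benchmark-workload"])

# 3. Rebuild, letting the compiler optimize with the recorded profile.
#    Combined with link-time optimization, this is the stage that wants the
#    whole program's intermediate representation in memory at once.
run(["g++", "-O2", "-fprofile-use", *SOURCES, "-o", "app-optimized"])
```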

      • Why not switch to compiling it with something that can cross-compile, running the compiler as a 64-bit process? Perhaps GCC or Clang could do the job?

      • by Xest ( 935314 )

        "(FWIW, Chrome has *always* been too big to use PGO.)"

        Chrome is also more performant, though, so what do they do instead? Is it simply better architected from the outset?

    • It's the optimization phase that takes up a lot of RAM. I'd rather they skip the optimization phase than damage features and functionality. If Firefox won't render pages well, I would give up on it.

  • by DrStrangluv ( 1923412 ) on Wednesday May 15, 2013 @01:48PM (#43733613)

    ... that have no meaning at all.

    Impacting 8 files on average would be horrible... for a project with 8 files. But how many is that relative to the size of Firefox?

    11% of files in Firefox are highly interconnected... but how does that compare to other projects of similar scope?

    The one value in that summary that had any meaning at all was the comment that the percentage of interconnected files "went up significantly following version 3.0". That at least has some relative measure we can use as a base.

  • Firefox is forcing me to update the Flash player, even though *I know* that I only visit one web site with Flash on it in Firefox.
    In other words: it refuses to work. (That means I have to shut down all other browsers with roughly 100 tabs/windows. Unacceptable!)
    Since some version (I don't know which), you cannot disable auto-update anymore. Unfortunately, the developers believe they may change the look and feel arbitrarily.
    I, for my part, use a very old Firefox for one specific web site. Otherwise I use Chrome and Safari an

  • I gave up on Firefox around version 1.4 when it became clear it was no longer anything like the lean, mean, lightweight browser I was looking for, and which once seemed to be its target. The bloat factor caused it to become irrelevant after that.
    • by Anonymous Coward

      Bullshit. If it were irrelevant you wouldn't have clicked on every last FF article to profess its irrelevance. The truth is that it's so relevant that you feel threatened enough by it to leave your mark everywhere.

      P.S. You forgot to tick "Anonymous Coward"

    • Firefox isn't meant to be a super-lightweight browser, but one that can render anything. A truly lightweight browser would be unusable for normal people. If you want a lightweight browser, try Dillo. The UI code and the rendering code probably consume very little at runtime compared to all of the image data being stored in memory, anyway.

  • by Animats ( 122034 ) on Wednesday May 15, 2013 @02:46PM (#43734141) Homepage

    It's a real problem. The Firefox dev team gave up on running add-ons in a separate process (the "electrolysis" project) because the code base was too single-thread oriented. Remember, some of the code dates back to Netscape. There's talk of reviving that project now [internetnews.com], but it's mostly talk and meetings.

    Retrofitting concurrency tends to be very hard, and the result tends to be ugly. You get something like Windows 3.x or MacOS 6/7, where easy things are complicated for the wrong reasons.

    • There are clearly people working on Mozilla who 'get it', but the project management seems averse to tackling anything long-term or difficult. They also like to officially deny that real problems exist. Just as a random sample of tech geeks, how long were people here bitching about Firefox memory usage while we were fed a line of excuses? I recall reading the blog of the lone dev who started what came to be known as the Memshrink project, which, when it was done, management loved to tout, but before tha

      • by Anonymous Coward

        FYI, Electrolysis is back on.

        https://wiki.mozilla.org/Electrolysis#Yes_these_dates_are_real_and_from_2013.21

    • The Firefox dev team gave up on running add-ons in a separate process (the "electrolysis" project) because the code base was too single-thread oriented. Remember, some of the code dates back to Netscape

      I thought the conventional wisdom was that the Mozilla team made a mistake, unnecessarily losing time by starting over from scratch. In other words, there is not enough Netscape code in Firefox!

  • Oh, the history: the Netscape codebase being so complex that a complete rewrite was necessary...
    • Quite correct. After 4.7 (circa IE5) they chucked it, and Mozilla was born after several more years of redevelopment.

      I've no idea how bad the old code base was, but based on past experience I'd imagine they would have done a cleaner job this time around.
      • by jensend ( 71114 )

        Well, things would be cleaner to re-implement this time around if they had to do another rewrite, because cross-platform development is now a basically solved problem.

        In 1998, getting one codebase that would work on things like various ancient Unices, "DOS-based" Windows (95/98/Me), and Mac OS 8/9 was a very difficult task. Beyond the lower-level concerns, few good libraries would work across all targets. C++ itself was a mess when trying to work across different systems and compilers: many things could not

      • Was that really why a new browser architecture was created? It may simply have been that the old rendering engine did not structurally support document reflow well, at least not without creating a huge second-system effect, which would lead to obfuscation problems. The old rendering engine may have been fine for what it was designed to do.

  • by Anonymous Coward

    The conclusions of the research are very positive and shed a very good light on the health of the code. Why is everyone commenting as if the conclusions were the opposite?

  • Well, if the fate of Camino with Gecko is anything to go by: not very.

"If it ain't broke, don't fix it." - Bert Lantz

Working...