
Linus On Branching Practices

CmdrTaco posted more than 3 years ago | from the not-the-wood-kind dept.

Programming

rocket22 writes "Not long ago Linus Torvalds made some comments about issues the kernel maintainers were facing while applying the 'feature branch' pattern. They are using Git (which is very strong with branching and merging) but still need to take care of the branching basics to avoid ending up with unstable branches due to unstable starting points. While most likely your team doesn't face the same issues that kernel development does, and even if you're using a different DVCS like Mercurial, it's worth taking a look at the description of the problem and the clear solution to be followed in order to avoid repeating the same mistakes. The same basics can be applied to every version control system with good merge tracking, so let's avoid religious wars and focus on the technical details."


90 comments


RELIGIOUS? (1)

Anonymous Coward | more than 3 years ago | (#34390634)

[...] so let's avoid religious wars and focus on the technical details.

Challenge Accepted!

Re:RELIGIOUS? (4, Funny)

Anonymous Coward | more than 3 years ago | (#34390676)

Agreed. Except when it comes to mercurial which is the sux0rs.

Re:RELIGIOUS? (0)

Anonymous Coward | more than 3 years ago | (#34390720)

My VCS can beat up your VCS.

Re:RELIGIOUS? (1, Funny)

Anonymous Coward | more than 3 years ago | (#34391140)

My VCS can beat up your VCS.

Oh yeah? My VCS [wikipedia.org] has over 30 years of continuous history!

Re:RELIGIOUS? (1)

gstoddart (321705) | more than 3 years ago | (#34391206)

[...] so let's avoid religious wars and focus on the technical details.

Challenge Accepted!

You are incredulous that there is a possibility that the choice of version control could lead to a holy war?

Man, you haven't developed much code then ... even CVS versus SourceSafe can lead to fisticuffs. Don't even mention Subversion or Perforce unless you're ready for a bit of a row. Some of us are old enough to have used RCS in our home folders.

This is serious business: everybody has a feature set they feel they will die without, and anything which doesn't do those things is crap. I think this might be second only to the Emacs vs. vi holy war ... of course, we all know vi is better, but I digress. ;-)

Re:RELIGIOUS? (1)

i_ate_god (899684) | more than 3 years ago | (#34392054)

Hence why it's a challenge to avoid religious wars over the issue?

I'm not sure who deserves the whoosh here, you or me.

Re:RELIGIOUS? (1)

gstoddart (321705) | more than 3 years ago | (#34392110)

I'm not sure who deserves the whoosh here, you or me.

Likely me. ;-)

Re:RELIGIOUS? (1)

GeniusDex (803759) | more than 3 years ago | (#34393536)

I find that neither emacs nor vi qualifies as a proper editor; that simplifies discussions considerably. More serious contenders for the title include nano and mcedit, but roughly any GUI editor beats a CLI editor if you ask me.

Re:RELIGIOUS? (1)

tibman (623933) | more than 3 years ago | (#34394100)

nano!

Re:RELIGIOUS? (1)

BeansBaxter (918704) | more than 3 years ago | (#34396534)

Nano is like being locked into a bathroom stall when the lights go out. Vim!

Re:RELIGIOUS? (0)

Anonymous Coward | more than 3 years ago | (#34442900)

When I log into my Xenix system with my 110 baud teletype, both vi *and* Emacs are just too damn slow. They print useless messages like, 'C-h for help' and '"foo" File is read only'. So I use the editor that doesn't waste my VALUABLE time.

Ed, man! !man ed

ED(1) UNIX Programmer's Manual ED(1)

NAME
          ed - text editor

SYNOPSIS
          ed [ - ] [ -x ] [ name ]
DESCRIPTION
          Ed is the standard text editor.
---

Computer Scientists love ed, not just because it comes first
alphabetically, but because it's the standard. Everyone else loves ed
because it's ED!

"Ed is the standard text editor."

And ed doesn't waste space on my Timex Sinclair. Just look:

-rwxr-xr-x 1 root 24 Oct 29 1929 /bin/ed
-rwxr-xr-t 4 root 1310720 Jan 1 1970 /usr/ucb/vi
-rwxr-xr-x 1 root 5.89824e37 Oct 22 1990 /usr/bin/emacs

Of course, on the system *I* administrate, vi is symlinked to ed.
Emacs has been replaced by a shell script which 1) Generates a syslog
message at level LOG_EMERG; 2) reduces the user's disk quota by 100K;
and 3) RUNS ED!!!!!!

"Ed is the standard text editor."

Let's look at a typical novice's session with the mighty ed:

golem> ed

?
help
?
?
?
quit
?
exit
?
bye
?
hello?
?
eat flaming death
?
^C
?
^C
?
^D
?

---
Note the consistent user interface and error reportage. Ed is
generous enough to flag errors, yet prudent enough not to overwhelm
the novice with verbosity.

"Ed is the standard text editor."

Ed, the greatest WYGIWYG editor of all.

ED IS THE TRUE PATH TO NIRVANA! ED HAS BEEN THE CHOICE OF EDUCATED AND IGNORANT ALIKE FOR CENTURIES! ED WILL NOT CORRUPT YOUR PRECIOUS BODILY FLUIDS!! ED IS THE STANDARD TEXT EDITOR! ED MAKES THE SUN SHINE AND THE BIRDS SING AND THE GRASS GREEN!!

When I use an editor, I don't want eight extra KILOBYTES of worthless
help screens and cursor positioning code! I just want an EDitor!!
Not a "viitor". Not a "emacsitor". Those aren't even WORDS!!!! ED!
ED! ED IS THE STANDARD!!!

TEXT EDITOR.

When IBM, in its ever-present omnipotence, needed to base their
"edlin" on a UNIX standard, did they mimic vi? No. Emacs? Surely
you jest. They chose the most karmic editor of all. The standard.

Ed is for those who can *remember* what they are working on. If you
are an idiot, you should use Emacs. If you are an Emacs, you should
not be vi. If you use ED, you are on THE PATH TO REDEMPTION. THE
SO-CALLED "VISUAL" EDITORS HAVE BEEN PLACED HERE BY ED TO TEMPT THE
FAITHLESS. DO NOT GIVE IN!!! THE MIGHTY ED HAS SPOKEN!!!

?

Yeah (3, Funny)

truthsearch (249536) | more than 3 years ago | (#34390696)

so let's avoid religious wars and focus on the technical details

Hahahaha... Good one!

Re:Yeah (5, Funny)

Monkeedude1212 (1560403) | more than 3 years ago | (#34390834)

I know. Linus is such a Linux fanboy. It's so obvious.

Re:Yeah (4, Funny)

Anonymous Coward | more than 3 years ago | (#34391330)

him and his blanket

Re:Yeah (1)

wrench turner (725017) | more than 3 years ago | (#34392416)

I thought Linus was referring to SCM religions, Git v. BitKeeper

Re:Yeah (0)

Anonymous Coward | more than 3 years ago | (#34392996)

I thought Linus was referring to SCM religions, Git v. BitKeeper

Whoosh!

Re:Yeah (0)

Anonymous Coward | more than 3 years ago | (#34403256)

It's not his fault, his parents were Linux Fanboys too, they even named him after Linux!

Re:Yeah (0)

Anonymous Coward | more than 2 years ago | (#34423718)

Where are the technical details? The summary basically amounts to "branch from a known stable baseline"... duh.

This all sounds complicated (4, Insightful)

Anrego (830717) | more than 3 years ago | (#34390716)

Which I imagine makes sense, as the kernel is very complicated from a dev standpoint.

For most projects I’ve been involved with, the path to success is keeping the trunk in a stable state, and using _that_ as the baseline. Dev code should never be in the trunk imo... the trunk should always be in a ready to release (or proceed to formal testing, or whatever) state. Everyone branches from the trunk.. everyone can update their branch to the latest trunk.. and everyone merges back down into the trunk when it’s good and ready.

Resisting the temptation to make “quick fixes” in the trunk is also important. Additionally, dev platforms should be set up so the system can be run from any branch as easily as from the trunk (making it a pain to test the system from a branch is a great way to ensure unstable code ends up in your trunk).

Obviously in the case of the kernel.. they probably have branches off branches off branches, but I think for most reasonably sized projects, that shouldn’t be necessary.
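The parent's workflow can be sketched as a runnable Git session. The branch names are hypothetical, and `git init -b` assumes Git 2.28 or later:

```shell
# Keep trunk releasable; branch from it, update from it, merge back when ready.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b trunk
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "stable baseline"

# Everyone branches from the stable trunk
git checkout -q -b feature/login trunk
git commit -q --allow-empty -m "feature work in progress"

# Everyone can update their branch to the latest trunk while working
git checkout -q trunk
git commit -q --allow-empty -m "another stable change"
git checkout -q feature/login
git merge -q --no-edit trunk

# Merge back down into trunk only when it's good and ready;
# --no-ff records the feature merge as an explicit merge commit
git checkout -q trunk
git merge -q --no-ff --no-edit feature/login
tip_msg=$(git log -1 --pretty=%s)
```

The `--no-ff` on the final merge is a matter of taste; it keeps the feature's history visible as a unit on the trunk.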

Re:This all sounds complicated (3, Insightful)

assemblyronin (1719578) | more than 3 years ago | (#34390934)

I think you actually restated the point that Linus made in the original thread. Which was: Don't branch and start new development from an unknown state.

For you, the stable baseline is the trunk. For Linus, the stable baseline is the labeled release build.

Re:This all sounds complicated (0)

Anonymous Coward | more than 3 years ago | (#34391056)

Wouldn't merging several branches back to the trunk lead to an essentially untested trunk?

Re:This all sounds complicated (2, Insightful)

Anonymous Coward | more than 3 years ago | (#34391110)

You merge several branches together into an "integration" branch, then test that and merge it to the trunk if it passes.

Re:This all sounds complicated (0)

Anonymous Coward | more than 3 years ago | (#34391194)

That's what I figured what had to be done. Thanks.

Re:This all sounds complicated (1)

c++0xFF (1758032) | more than 3 years ago | (#34392726)

And don't branch off the integration branch for exactly the same reasons stated in the article.

The problem comes when stable releases are too infrequent. Changes start requiring features and fixes from other changes waiting to go to the trunk, and a programmer is tempted to branch from the integration branch to pull those in.

A better solution is to branch from a stable point on the trunk and merge the needed changes into that branch. This makes it clear exactly what dependencies exist and the required changes follow along automatically. If a new stable release comes along in the meantime, just merge the trunk back up to your branch (as you'd probably do anyway, right?).
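In Git terms, the parent's suggestion might look like the following; the branch and tag names are hypothetical, and `git init -b` assumes Git 2.28 or later:

```shell
# Branch from the stable tag, not from integration, and merge in
# only the change you actually depend on.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b trunk
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "stable release"
git tag v1.0                         # the stable point on the trunk

# A colleague's fix you depend on, itself branched from the stable tag
git checkout -q -b fix/timeout v1.0
git commit -q --allow-empty -m "timeout fix"

# Your feature also starts from the stable tag...
git checkout -q -b feature/retry v1.0
git commit -q --allow-empty -m "retry logic"

# ...and pulls in just the dependency, making it explicit in the history
git merge -q --no-edit fix/timeout
count=$(git rev-list --count v1.0..HEAD)
```

The merge commit documents exactly which dependency the feature picked up, instead of silently inheriting everything sitting on an integration branch.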

Re:This all sounds complicated (1)

gorzek (647352) | more than 3 years ago | (#34393324)

In a past life I had a fairly straightforward strategy:

1. The trunk was always the next enhancement release--assumed unstable at all times.
2. When we were ready to test that release, we would branch the trunk.
3. From that point forward, new development would be done on the trunk again and only fixes would go into the release branch.
4. Once we had the release branch stable enough to release, we would tag it and cut a release.
5. A release branch would wind up with one or more "maintenance releases," so each time we wanted to have one of those we'd make a branch off of the release branch. Fixes not destined for the current maintenance release would go on the upstream branch to be part of the next one.
6. All fixes were merged upstream using a custom merge tool.

Clients were able to cherry-pick what patches they wanted, too, and we would merge those fixes into a custom branch (based on their installed release) and cut a patch release. We had a system that tracked dependencies so the defect rate was quite low with this process. If a given fix had too many dependencies we'd force the customer to take the next maintenance release.

At any given time, we maintained an enhancement branch (trunk), at least two major release branches, and then we usually supported the last two maintenance releases on each major release branch.
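The six steps above can be sketched as Git commands (branch and tag names hypothetical; `git init -b` assumes Git 2.28 or later):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b trunk
git config user.email dev@example.com
git config user.name Dev

# 1. Trunk carries the next enhancement release, assumed unstable
git commit -q --allow-empty -m "enhancement work (unstable)"

# 2. Ready to test: branch the trunk
git branch release/2.0 trunk

# 3. New development returns to trunk; only fixes land on the release branch
git commit -q --allow-empty -m "next enhancement"
git checkout -q release/2.0
git commit -q --allow-empty -m "stabilization fix"

# 4. Stable enough: tag it and cut a release
git tag v2.0.0

# 5. Maintenance releases branch off the release branch
git checkout -q -b maint/2.0.1 release/2.0
git commit -q --allow-empty -m "maintenance fix"

# 6. Merge fixes upstream so the trunk never loses them
git checkout -q trunk
git merge -q --no-edit maint/2.0.1
tag_ok=$(git tag -l v2.0.0)
nbranches=$(git branch | wc -l)
```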

Re:This all sounds complicated (1)

Anrego (830717) | more than 3 years ago | (#34392126)

The approach I tend to like is:

- merge the trunk back up into the branch
- do your "pre trunk commit" testing in the branch
- merge branch down into trunk

If things do get crazy, you can create an "integration branch" .. but I think that can generally be avoided

Re:This all sounds complicated (4, Informative)

gstoddart (321705) | more than 3 years ago | (#34391472)

For most projects I’ve been involved with, the path to success is keeping the trunk in a stable state, and using _that_ as the baseline. Dev code should never be in the trunk imo... the trunk should always be in a ready to release (or proceed to formal testing, or whatever) state. Everyone branches from the trunk.. everyone can update their branch to the latest trunk.. and everyone merges back down into the trunk when it’s good and ready.

He's also saying that everybody should branch from the exact same point along the branch or trunk. That way everybody has a set of diffs against the same baseline to merge back in.

If you always branch from trunk, then as more stuff gets added, you start from a different point than you might otherwise.

The specifically labeled "point in time" means that these separate changes can more readily be integrated, as they'll all be from the exact same baseline.

If the trunk is ready for formal testing, and it affects your other branches, you have a harder time if you fix things and need to push them back into those branches.
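A minimal Git illustration of branching everyone from the same labeled point (tag and branch names hypothetical; `git init -b` assumes Git 2.28 or later):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b trunk
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "known-good state"
git tag baseline-1                  # the agreed point in time
baseline=$(git rev-parse baseline-1)

# The trunk keeps moving...
git commit -q --allow-empty -m "later trunk change"

# ...but every branch starts from the same labeled baseline,
# so every set of diffs is against an identical base
git branch feature/a baseline-1
git branch feature/b baseline-1
base_a=$(git merge-base trunk feature/a)
base_b=$(git merge-base trunk feature/b)
```

`git merge-base` confirms that both branches share the tagged commit as their common base, which is what makes the eventual three-way merges line up.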

Re:This all sounds complicated (4, Interesting)

MtHuurne (602934) | more than 3 years ago | (#34391928)

I think that the development process should be selected to match the particular project and the stage it is in. There is no perfect process that applies to every project, or even to one project forever. A team of 4 in a single room working on a demo for a new product idea will have very different requirements from a team of 20 working in two locations on an improved version of a product that is already in production...

There are two conflicting goals: to avoid breaking the main branch (trunk) and to get changes out to the other developers soon. A broken main branch wastes the time of other developers on the project. But integrating changes late has its own inefficiencies: Problems in the modifications will only be raised after the work is done. It is more likely for one set of modifications to conflict with another set if both are being developed in parallel for a longer time. Other developers might have to wait for a full set of changes to arrive while they only need a subset, or they might start merging the subset from each other's development branches, creating a confusing mix of versions.

Committing directly into trunk can be acceptable and even desirable depending on the project. It depends on how likely commits are to break the code: How many developers are there? How many mistakes do they make? (a combination of experience and carefulness) Is there decent test coverage before committing? How fragile is the code base; are there many unexpected side effects? And it depends on how much damage a broken main branch does: How long does it typically take to find and fix a problem? How modular is the code base: will a bug in one part be a nuisance to developers working on another part? And it also depends on how much there is to gain from early merging: Is the project in the start-up phase where it is likely that other developers are waiting for new core functionality, or is the code base mature and are most changes done on the edges of the program? Are all design decisions made before code is written or are developers doing design and implementation work at the same time?

Re:This all sounds complicated (3, Interesting)

i_ate_god (899684) | more than 3 years ago | (#34392092)

I go in the reverse.

Trunk is dev, branches are stable. We haven't had much trouble with this setup at all.

Re:This all sounds complicated (3, Interesting)

Anonymous Coward | more than 3 years ago | (#34392908)

We did this where I worked previously too. It was also the MO for the artists building the artwork for the game.

Your trunk is the "Main line", a boiling pot of all the changes, and can change on a minute-by-minute basis right near crunch time. This is good because you fail early if your change is not compatible with other changes, instead of at the end of the day or whatever. This is very important for artwork.

The last known-good build is tagged/labelled (or a branch, if you prefer) and was generated by an auto-build process that ran tests, or by the lead/QA department.

Also, an admin user (project manager, lead developer, etc.) could lock the whole project with exceptions for themselves or other specified users (the automated build machine user, etc.), do a build-and-test cycle, and then mark it as a known-good release.

The source control system for the artists also allowed changes to be kept separate from the "Main Line" but still kept in source control; this is like a branch, if you will, but with the difference that you could give other users or custom-defined groups access to your WIP changes. In effect this is like being able to get the latest of the trunk plus changes from other users' branches at once.

Very useful when a group of artists (3D modeller, texturer, animator) is working on a new area, as they can refine it without checking non-working bits into the main line everyone else is working on.

Re:This all sounds complicated (1)

Ash Vince (602485) | more than 3 years ago | (#34403144)

Trunk is dev, branches are stable. We haven't had much trouble with this setup at all.

I think the problem is that this does not really work when you have people all over the world trying to do different things on a huge base of code over different timescales. If everyone is working towards a single release date then this makes more sense as you can implement things like feature freezes.

In kernel development, though, there can be people working on a refinement that will take a very long time, so several releases will go by in between. There will also be people starting their development at random times with no real knowledge of what else is going on in the trunk.

The single Trunk for dev works well in companies where everyone sings from the same hymn sheet but is not so good for distributed development being contributed to from different developers at different times on different projects.

No single company would ever structure development like this, as it is woefully inefficient in terms of manpower, but it does have other advantages since it is closer to an evolutionary process. Consider Linus as natural selection in action :)

Re:This all sounds complicated (1)

mgiuca (1040724) | more than 3 years ago | (#34412910)

That's the model that ended up causing the "Source" engine to be named so.

From what I heard, near the end of the development of Half-Life 1, Valve had their "src" directory (their mainline) and wanted to make some more radical engine improvements. These improvements were too last-minute to make it into the game, so they created a branch for the "gold" version of Half-Life, called "goldsrc", which was only used to commit stable code and polish the game, and the experimental changes were committed to the trunk "src".

Hence, the Half-Life 1 engine ended up being informally known as the "GoldSrc" engine, while the main "src" trunk went on to be officially named the "Source" engine.

Re:This all sounds complicated (3, Informative)

kbielefe (606566) | more than 3 years ago | (#34392420)

I hate to break it to you, but even if your trunk is clean, you will still have this problem in some other branch. Let's examine a very common situation where you have an interface being changed, one or more implementations of that interface, and one or more users of that interface. Developers are working simultaneously on both sides of that interface in order to meet a deadline.

Because of your clean trunk rule, none of the changes can be checked into the trunk until all of the changes are ready, but they still need to be shared among the people working on it, or they will have no idea if it is "good and ready." So those developers create their own branch, which of necessity is sometimes in a temporarily broken state. You might not think of it as a branch, if it's John's working directory and the "checkout" procedure is him emailing files around, but it's conceptually a branch nonetheless.

Linus is simply acknowledging that temporary brokenness is inevitable when multiple people integrate changes to the same code, and therefore whatever branch contains that messy integration should use tags to communicate the best branch points. I'm not saying keeping a clean trunk isn't a good idea, just that you have to deal with broken branch points one way or another, even if it's just John deciding when the best time is to email out the new header files to his team.
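A sketch of that tagging idea in Git, with hypothetical branch and tag names: the integration branch passes through temporarily broken states, and a tag marks the last point that is safe to branch from.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b integration     # needs Git 2.28+ for init -b
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "interface change (breaks implementations)"
git commit -q --allow-empty -m "implementations caught up; builds again"
git tag known-good             # communicate the safe branch point
git commit -q --allow-empty -m "next breaking change, in flight"

# New work branches from the tag, not from the possibly-broken tip
git branch feature/new-work known-good
behind=$(git rev-list --count feature/new-work..integration)
```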

comment from original page (1)

martas (1439879) | more than 3 years ago | (#34390730)

Or you do what we do: keep trunk pristine - no development should EVER occur in your trunk and it should always be possible to push a stable release from it.
Seems reasonable to me... Don't know why this wouldn't solve the problem, or any other reason why it's not desirable.

Re:comment from original page (4, Informative)

Americano (920576) | more than 3 years ago | (#34391244)

Yep, this is standard practice if your scm support knows what they're doing. The only reason it's not "desirable" to only branch off of stable, 'known-good' baselines is developer laziness. It can take more time setting up the branch, and sometimes that quick checkout-edit-checkin on the trunk is just SOOOO tempting as a shortcut. I see this a lot in groups working on new products, too - "it's never been released to production, so we'll just branch from wherever, and call it a day." Usually they grow out of this type of practice after they spend a few days untangling a mess they've created, but there are some die-hards who just hate having to deal with anybody else, and insist on doing their own thing.

This is why it's important to have:
1) Management / leadership that understands the value of proper configuration management, and expects good practices to be used;
2) Support for your SCM system that knows how to set up these practices and is empowered to enforce them;
3) Mature developers who understand that "fastest" isn't always "best";

(Full disclosure: part of my role in my current job involves clearcase admin, and i've also worked with svn, cvs, pvcs, and (shudder) vss in varying capacities)

Re:comment from original page (3, Insightful)

gstoddart (321705) | more than 3 years ago | (#34391620)

Usually they grow out of this type of practice after they spend a few days untangling a mess they've created, but there are some die-hards who just hate having to deal with anybody else, and insist on doing their own thing.

And those people get smacked on the knuckles with a ruler. If they keep failing to abide by your policies, you smack 'em on the ass with the ruler. If they keep going like that, you get rid of them.

There are very few things more destructive to a development team than some prima donna who won't follow the rules and procedures. In the long run, if they won't play by the rules laid down, they'll do more harm than good.

Source Code Management and "cowboys" can't really coexist if you want to be able to have maintainable software. I've seen someone who would apply changes to any old branch and more or less decree it was someone else's problem to get them onto main -- buh bye, if you're sabotaging the build process, we don't need you.

Re:comment from original page (3, Insightful)

Americano (920576) | more than 3 years ago | (#34391876)

I agree, and if the choice were mine, there are some people I work with who would be pink-slipped immediately... but, politics at a large-ish company being what they are, it's a matter of demonstrating to managers that the actions are counter-productive and costing us time and money... then letting them draw the proper conclusions. In a well-run meritocracy, these people would be gone for violating the "No Asshole" rule.

The problem is, some of the managers are over-promoted cowboys themselves - I've heard, no exaggeration, the following from a manager when I was arguing for locking down one of our production systems because people kept making changes live: "I know it's good policy, but as soon as policy slows down my developers, the policy goes out the window."

The technical problems are easy. It's this political maneuvering that requires the patience of a saint.

Re:comment from original page (4, Insightful)

gstoddart (321705) | more than 3 years ago | (#34391982)

I've heard, no exaggeration, the following from a manager when I was arguing for locking down one of our production systems because people kept making changes live: "I know it's good policy, but as soon as policy slows down my developers, the policy goes out the window."

Run. Run fast, run far.

If managers are going to support the notion of un-tracked changes on a production server in the name of getting things done, then eventually someone will be looking to lay blame for something that went horribly wrong.

Failure to understand why people have change procedures for live systems is pretty significant. And, depending on your industry ... un-tracked fixes and tweaks can actually get you in legal trouble. Think Sarbanes-Oxley.

In almost any sane shop, failure to follow the change procedures can be a grounds for immediate dismissal.

Re:comment from original page (1)

ishobo (160209) | more than 3 years ago | (#34396104)

In almost any sane shop, failure to follow the change procedures can be a grounds for immediate dismissal.

Most companies are not sane. I once worked for a forex provider where policies were not followed and never enforced by management. It was a reactive and chaotic environment, where engineering had direct access to production and build/release was responsible for production operations versus an actual operations/MIS team. I blame this squarely on the youth culture in the IT world, where discipline is rare. I actually had a developer (senior platform architect at age 28) complain about the number of branches when I wanted to bring a 4th online. In the early 90s, when I did development at Lotus, I had to work with 21 branches for all the various projects I was involved with and never broke a sweat.

Re:comment from original page (0)

Anonymous Coward | more than 3 years ago | (#34399442)

I'm all in favour of locking down production systems, but this manager has a point with "as soon as policy slows down my developers, the policy goes out the window."

My company has way too much policy and inefficient development processes. It does slow down developers to the tune of taking 3x or 4x as long to make many changes. Maybe large projects are not affected to that extent, but the cost of the overhead is inversely proportional to the scale of the work. As a result, small problems with the system do not get fixed and cause problems into the distant future.

Furthermore, I have seen no evidence that these additional controls which have been implemented over time have reduced faults or improved quality. I'm fairly sure that quality has gone down because bad design decisions and corners cut to get the release out are now set in stone.

There's a reason that startups have a reputation of being responsive and quick to implement and improve systems, and large companies of being slow, unresponsive, and almost impossible to get things fixed.

Re:comment from original page (1)

wurp (51446) | more than 3 years ago | (#34395238)

s/Source Code Management/Software Configuration Management/g

Re:comment from original page (1)

wrench turner (725017) | more than 3 years ago | (#34392482)

Shouldn't your continuous integration's regression suite assure that mainline is virtually stable? Why isn't the call for better regression tests?

Re:comment from original page (2, Insightful)

Americano (920576) | more than 3 years ago | (#34392750)

I'm sorry, how does "automated testing of the main line via a CI tool after the changes are committed to the main line" assure that your main line stays stable?

"Virtually stable" is not "stable". When you work for a financial services firm whose livelihood depends on the market data and trading systems your team builds, "virtually stable" is nowhere near "stable" and doesn't even begin to approach "good enough".

Re:comment from original page (2, Insightful)

wrench turner (725017) | more than 3 years ago | (#34393226)

It's not the CI tool that assures that mainline is stable; it's the quality of the regressions.

Re:comment from original page (1)

ebuck (585470) | more than 3 years ago | (#34393888)

I'm sorry, how does "automated testing of the main line via a CI tool after the changes are committed to the main line" assure that your main line stays stable?

"Virtually stable" is not "stable". When you work for a financial services firm whose livelihood depends on the market data and trading systems your team builds, "virtually stable" is nowhere near "stable" and doesn't even begin to approach "good enough".

How are you going to know that the mainline is stable unless you are going to test it?

How are you going to ensure the testing was complete and consistently applied unless you're going to automate it?

You NEED CI to assure the testing is done as part of the process on every mainline check-in. You NEED it because it runs the suites of tests which PROVE the main line is stable.

If your test suite doesn't prove the mainline is stable, then it's a fault of the test suite, not the CI / build / automated testing systems.

Re:comment from original page (1)

c++0xFF (1758032) | more than 3 years ago | (#34393830)

Yep, this is standard practice if your scm support knows what they're doing.

And I have yet to see it done right in practice. Especially with ClearCase. Every config spec I've seen includes /main/LATEST or similar, instead of working off of labels.

The closest I've seen was with Subversion, where the policy was to only branch off of the tags directory. But even then most people just worked off of trunk. It was a mess.

If you're using Subversion, the right way for a project of any decently large size is to only work off of a branch and only branch off a tag.

Re:comment from original page (2, Informative)

Americano (920576) | more than 3 years ago | (#34395044)

You can do it right with a 4-line config spec. The config spec needs to include that /main/LATEST clause at the bottom because new elements being added to the branch aren't labeled with the baseline you're branching from.

The config spec should take the form of:

element * CHECKEDOUT
element * .../branch/LATEST
element * BASELINE -mkbranch branch
element * /main/LATEST -mkbranch branch

The only time the /main/LATEST rule will ever be evaluated is if an element is added to the branch after the BASELINE is applied, and even then, it will force development out to the branch.

Re:comment from original page (1)

Grapes4Buddha (32825) | more than 3 years ago | (#34396600)

If you want to be strict about it, you could always replace the /main/LATEST rule with a /main/0 rule, like so:

element * CHECKEDOUT
element * .../branch/LATEST
element * BASELINE -mkbranch branch
element * /main/0 -mkbranch branch

Re:comment from original page (1)

Americano (920576) | more than 3 years ago | (#34405020)

If you're labeling properly, that'll have more or less the same effect - the only time you should be 'falling through' to the /main/X clause is if it's a new element added after label "BASELINE" was applied, and if that's the case, then /main/0 should be /main/LATEST unless you're adopting this config spec halfway through a dev cycle and scared you'll pick up things you don't intend to.

Re:comment from original page (1)

Grapes4Buddha (32825) | more than 3 years ago | (#34405380)

Agreed, that's why I said "If you want to be strict about it".

Let's say you had an uncaught labelling error on a source file (foo.c) when you created your baseline label. I know, you're checking for that, but just for the sake of argument, let's say it happened. With a /main/LATEST catch-all rule, you would get whatever version of foo.c happens to be /main/LATEST. When you run a build, you may or may not get an error, but you run the risk of introducing a bizarre bug in your build image, which could lead to many hours of debugging before you track it down to that source file being the wrong version. With a /main/0 catch-all rule, you will most likely get a compile error (since foo.c will be empty). Maybe not when foo.o gets generated, but rather at link time. Basically, you're better set up to catch the problem sooner.

But yes, you are correct that it should be the same. Using /main/LATEST can also be nice if you have areas that are not labeled and you want to get /main/LATEST for those other areas.

Re:comment from original page (0)

Anonymous Coward | more than 3 years ago | (#34391364)

I was about to point it out, but saw you'd already done so.

The only thing I'll add is that the trunk can easily be kept clean by maintaining the Trunk, a group of Shared branches, and the usual individual user branches. A Shared is a branch off the Trunk where all major development for a given group happens. There should be as few of these as possible. Each copy of Shared represents development of a particular portion of the Trunk. In other words, there is a 'mouse driver updates' Shared, which is all stable Trunk except the input drivers section, which is being developed by one group, and a 'video codecs' Shared, which reflects only the Trunk and the changes to the video codecs. It is expected that NO merges occur between the Trunk and a Shared unless the contents of the Shared are stable and tested. This leads to a three-tiered system: personal: code in progress; Shared: at least compiles, should have undergone basic testing; Trunk: kid tested, mother approved.

The downside is that this can lead to divergence if the users don't take care: Shared branches should be created based on natural divisions in the code, and really, a Shared branch should only be able to be created by an admin of some sort (even though this kind of user permission isn't supported on version control systems AFAIK). As long as the interfaces between Shared divisions don't change, the merges stay relatively painless.

Re:comment from original page (1)

AigariusDebian (721386) | more than 3 years ago | (#34391580)

Sure, you can do that if you have 5-10 commits per day. However, Linus merges on *average* around 100 change sets to the Linux kernel trunk every day and has been doing that for a long time now. You cannot expect to keep both that speed and also keep the trunk 100% stable all the time.

Creating feature branches from known-to-be-stable points is a much easier approach for everyone involved.
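A minimal git sketch of that approach (throwaway repository; the tag and file names are invented): start the feature branch from the last tagged stable point instead of from the tip of the main line.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

echo stable > file.txt
git add file.txt
git commit -qm "stable release"
git tag v1.0                        # known-good baseline

echo wip >> file.txt
git commit -qam "unstable work in progress"

# Feature branch starts at the tag, not at the (unstable) tip:
git checkout -qb feature v1.0
git log --oneline -1
```

The feature branch's starting point is then reproducible no matter how much churn lands on the main line afterwards.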

it's simply ignorance! (2, Interesting)

bogolisk (18818) | more than 3 years ago | (#34392294)

The kernel devs don't do development on master! However, git's fast-forward merge will, by default, push development/intermediate commits onto master. Those intermediate commits are extremely useful for code inspection/code review and bisect-based debugging. They are not meant for starting a new dev branch, and that's why they're not tagged! There's nothing new or interesting in that article other than a bunch of stupid comments at the bottom. The whole thing smells like disguised advertising for PlasticSCM.
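The fast-forward behaviour the parent describes, and the --no-ff merge that keeps those intermediate commits off the first-parent line, can be seen in a throwaway repository (all names invented):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "base"

git checkout -qb topic
git commit -q --allow-empty -m "intermediate: broken WIP"
git commit -q --allow-empty -m "intermediate: fix the WIP"

git checkout -q -                    # back to the default branch
# A plain `git merge topic` here would fast-forward, putting both
# intermediate commits directly on the main line. --no-ff forces a
# merge commit instead, so the first-parent history stays clean:
git merge -q --no-ff -m "merge topic" topic
git log --first-parent --oneline     # merge commit + base only
```

The intermediate commits are still reachable through the merge commit's second parent, so code review and bisect keep working.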

Re:comment from original page (1)

leuk_he (194174) | more than 3 years ago | (#34395602)

I must admit I do not often work with multiple branches, but one problem I see here is that patches are kept in a branch for a very long time, because some developers do not like it to be called stable. As soon as it is stable they should stop tinkering and get assigned something else, OR other people will see their bugs in stable... and they might just as well keep it in the branch.

Developers use tools with out thought. News at 11 (1)

OzPeter (195038) | more than 3 years ago | (#34390860)

That's what it amounts to. Developers are happily branching, and branching off branches, with no concern for whether what they branched from was stable in the first place.

Branch-then-merge is NASTY (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34390876)

Even the best merge tools can't guarantee the logic of the merged code is correct no matter how stable/good your branch point is.

Re:Branch-then-merge is NASTY (1)

dkleinsc (563838) | more than 3 years ago | (#34391382)

No single tool can guarantee the logic of any code is correct. Your only options in that department are mathematical proof, unit tests, integration tests, and field tests.

In other words, SCM tools, like all other incredibly useful tools, do not constitute a silver bullet in software development.

Re:Branch-then-merge is NASTY (1)

drdrgivemethenews (1525877) | more than 3 years ago | (#34391930)

Not always. It's quite useful if your group is doing potentially destabilizing new work and you still need to keep up with bug fixes in a release branch. What's the alternative, a gazillion "#ifdef MY_NEW_FEATURE"s, twiddled makefiles, etc?

And what is a company with multiple shipping releases of the same code supposed to do? Good merge tools with good engineers using them are their only hope. When a bug is discovered in 5.0 and it needs to be fixed in 5.1, 5.2, and 6.0, three merges are necessary, no matter what SCM tool you're using.
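In git those per-release fixes are often done as cherry-picks of the one fix commit onto each shipping branch; a toy sketch (the branch names, commit message, and file are invented):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "base"

git branch rel-5.1                   # shipping branches, all at "base" here
git branch rel-5.2
git checkout -qb rel-5.0

echo fix > fix.txt                   # the bug fix lands on 5.0 first
git add fix.txt
git commit -qm "fix: null deref in frobnicate()"
fixsha=$(git rev-parse HEAD)

for b in rel-5.1 rel-5.2; do         # port the same commit to each release
  git checkout -q "$b"
  git cherry-pick -x "$fixsha" >/dev/null
done
```

The -x flag records the origin commit id in each ported commit's message, which helps later when auditing which releases actually carry the fix.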

ClearCase solved these problems years ago (-1, Troll)

Anonymous Coward | more than 3 years ago | (#34390974)

These discussions are laughable. ClearCase identified and solved all these problems in the commercial world long before Free software. The problem is that the sort of people who do kernel work don't work for the sort of companies that can afford the ClearCase licensing fees - or at least *mostly* don't.

Re:ClearCase solved these problems years ago (2, Interesting)

assemblyronin (1719578) | more than 3 years ago | (#34391058)

Never underestimate the stupidity of some people. I've seen some VOBS get royally hosed and take a day or two to go through the version-trees of individual elements to untangle their merge history. This was all due to two things: 1) OzPeter's Point [slashdot.org] 2) Lazy CM that didn't want to provide simple scripts and lock down a standard method for view/config-spec management.

Re:ClearCase solved these problems years ago (1)

Americano (920576) | more than 3 years ago | (#34391292)

+a million, spot on.

One of the guys I worked with at my first job working with ClearCase put it thus: "ClearCase is great, it's powerful and flexible. And it gives you plenty of rope to hang yourself with."

Re:ClearCase solved these problems years ago (1, Interesting)

rjstanford (69735) | more than 3 years ago | (#34391240)

ClearCase identified and solved all these problems in the commercial world long before Free software. The problem is that the sort of people who do kernel work don't work for the sort of companies that can afford the ClearCase licensing fees - or at least *mostly* don't.

Well, I'll go with that if you expand the definition to include those whose companies went with ClearCase and who learned first hand how much of a royal PITA it is.

It remains that for many smaller organizations (or teams), something boring but 100% predictable and almost maintenance-free like SVN gets the job done just fine. But it's not sexy/expensive...

Re:ClearCase solved these problems years ago (1)

ishobo (160209) | more than 3 years ago | (#34396222)

...but 100% predictable and almost maintenance free like SVN...

SVN is neither.

Re:ClearCase solved these problems years ago (1)

rjstanford (69735) | more than 2 years ago | (#34417296)

...but 100% predictable and almost maintenance free like SVN...

SVN is neither.

Ooh, très trendy. In reality, SVN has powered a staggeringly large number of software projects, as did CVS before it. In the same way that 'ant' (or heck, even 'make') remains remarkably effective for many projects. When you look at 10-500 dev years in a project, your source code control and build maintenance costs can often be in the 10-20 hour range (over the project lifetime). Try that with ClearCase and its brethren. And yes, I've worked on several projects in the 3-10 million LOC range that did just fine with 'make'. Decent templates make almost anything just work - if you have a dedicated 'build expert' or 'sccs maintainer', or those hours/dollars actually show up as something other than noise, then you're doing it wrong.

The reality is that most software projects aren't dramatically unique. They really aren't. Reinventing the wheel is almost always silly, using the 'old' boring solution almost always works and lets you concentrate on the actual innovative software you're designing, rather than having a great innovative code control / translation layer / build system and software that never quite gets shipped.

Huh what? (0)

Anonymous Coward | more than 3 years ago | (#34391018)

OP needs to go back and correct his post so that it reads in the form of a question, please. I don't understand a bit of it.

Isn't this kind of obvious? (2, Insightful)

syousef (465911) | more than 3 years ago | (#34391128)

The whole story seems to be summed up by: "Don't just branch from some random point. Wait until your code is stable and branch from that." and "Create these stable points from which you can branch as often as is practical". I'm sorry but I've never been tempted to branch from an unstable point, and I'd be horrified if anyone on my team tried to do so.

As for only adding features to a stable release I find that depends on the size, complexity and maturity of the project. Early on nothing is feature complete and everyone tends to work on an unstable head/trunk/master/whatever-your-scm-calls-it. Once development has settled down and there's been a release, it's much more controlled and people do tend to add their code from a stable point.

I'm sorry I just don't see any life changing revelations here.

Re:Isn't this kind of obvious? (1)

JohnFluxx (413620) | more than 3 years ago | (#34391604)

> I'm sorry but I've never been tempted to branch from an unstable point, and I'd be horrified if anyone on my team tried to do so.

What?

This would happen just by checking out the latest version and then starting to write some code. How is that a practice you should be "horrified" by?

Re:Isn't this kind of obvious? (1)

syousef (465911) | more than 3 years ago | (#34395190)

> I'm sorry but I've never been tempted to branch from an unstable point, and I'd be horrified if anyone on my team tried to do so.

What?

This would happen just by checking out the latest version and then starting to write some code. How is that a practice you should be "horrified" by?

What on earth are you talking about? No, you do not automatically branch every time you check in or check out code. I think you need to go back and read up on the concept of a branch.

Re:Isn't this kind of obvious? (1)

JohnFluxx (413620) | more than 3 years ago | (#34396978)

And you need to read up on the concept of git.

Seriously, this isn't SVN.

Typical work flow in git is:

1) Clone remote repository
2) Make a branch of the clone's master (origin/master), calling it, say, "master".
3) Add your commits to your branch.
4) Continue until you're happy.
5) Merge your changes with any changes that other people have done.
6) Push your changes onto the remote server.

What Linus is saying is that step 5 can cause trouble. Instead of making a branch of clone's master, you should use the latest tag instead. This is not the way you'd do it on any small or medium project.
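Spelled out as commands, with a local bare repository standing in for the remote server (all paths and names invented), the six steps look roughly like:

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"        # stand-in for the remote server

# seed the "remote" with one commit so there is something to clone
git clone -q "$work/origin.git" "$work/seed" 2>/dev/null
cd "$work/seed"
git config user.email a@example.com && git config user.name A
git commit -q --allow-empty -m "initial"
git push -q origin HEAD

# 1) clone the remote repository
git clone -q "$work/origin.git" "$work/dev"
cd "$work/dev"
git config user.email b@example.com && git config user.name B
# 2) branch from the clone's default branch
git checkout -qb mywork origin/HEAD
# 3-4) add commits until you're happy
git commit -q --allow-empty -m "my feature"
# 5) merge in anything that landed upstream in the meantime
git fetch -q origin
git merge -q origin/HEAD
# 6) publish the branch
git push -q origin mywork
```

Linus's point concerns step 2: on a tree as busy as the kernel, the start point would be the latest stable tag rather than origin/HEAD.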

Re:Isn't this kind of obvious? (1)

syousef (465911) | more than 3 years ago | (#34398336)

And you need to read up on the concept of git.

Seriously, this isn't SVN.

I've used git exactly once, and that was last weekend to pull down Spring Security source. So no argument I need more time in git to comment on git specific process. But did you miss the article summary stating that this should be discussed regardless of the source control flavour?

Typical work flow in git is...

What Linus is saying is that step 5 can cause trouble. Instead of making a branch of clone's master, you should use the latest tag instead. This is not the way you'd do it on any small or medium project.

Therein lies the rub. It all depends on the size of the project, number of contributors, maturity of the code, stability of master. Where I work, you don't just check in random non-working changes. Our trunk (yes we work with SVN) is always stable, except in the very early stages of project development.

Merging your changes will always cause trouble if the changes are incompatible/conflict. It doesn't matter what point you merge from. Is there some git-specific problem or bug that I don't know about here? If you can't count on your team communicating and co-ordinating on the master/trunk/head, then yeah, working from a known stable point and creating patches from that makes perfect sense. But again, this isn't news, and it isn't git-specific.

Re:Isn't this kind of obvious? (1)

Tacvek (948259) | more than 3 years ago | (#34397506)

Unless you are using a VCS that does mandatory per-file locking, you are conceptually branching every time you check out code, since before you check the code back in, somebody else could come along and check in other changes.

Granted, most version control systems don't label a working directory as a branch. The only real difference is that a working directory does not have a series of commits while a branch does. Of course, even that is not much of a difference, since good practice would be to examine your working directory's changes and manually break them up into a series of commits as required.

Re:Isn't this kind of obvious? (1)

syousef (465911) | more than 3 years ago | (#34398368)

Unless you are using a VCS that does mandatory per file locking, then you are conceptually branching every time you check out code, since before you check the code back in, somebody else could come along and check in other changes.

Sorry, but no. "Conceptually" branches and tags have very specific properties that aren't fulfilled by checking out. A key feature being that you can go back to a precise version of the whole code base which you have labelled (hopefully clearly).

Re:Isn't this kind of obvious? (1)

Tacvek (948259) | more than 3 years ago | (#34425686)

That depends very much on the VCS. Many modern VCSs have the property that any complete checkout has a distinct revision identifier, meaning that you can always go back to the same version of the entire codebase. Furthermore, there are version control systems out there that do not permit whole-repository branching, but do support per-file or per-directory (non-recursive) branching. Those are rather esoteric systems, but they do exist.

Re:Isn't this kind of obvious? (1)

iluvcapra (782887) | more than 3 years ago | (#34391852)

I don't know, I never really put much thought into branch policy, namely, what exactly does the "master" branch do and is it necessarily always a safe place to base new revisions? Practice seems to vary.

Also this seems to bring into question how you recognize "good" commits in the system versus "in progress" ones. If I were adding a new feature to source in a repository, would I necessarily start from a release tag, and then merge commits from master since then to get everything that changed since the release? Or do I not merge any commits subsequent to the release tag, or is there a branch in the tree the maintainers keep for known-good bases for new branches?

Some of this could be cured with continuous integration, where you put hooks into your SCM to run unit tests on your tree before it commits, and the SCM will refuse the commit if they fail (you would only apply this policy on certain branches or repositories).

Anyways, slightly enlightening.
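The parent's idea of gating commits on tests can be sketched locally as a pre-commit hook that refuses the commit when the project's test script fails (.git/hooks/pre-commit is git's real hook path; run_tests.sh is an invented stand-in for a test suite):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Run the project's tests; a non-zero exit aborts the commit.
exec ./run_tests.sh
EOF
chmod +x .git/hooks/pre-commit

printf '#!/bin/sh\nexit 1\n' > run_tests.sh   # simulate a failing test suite
chmod +x run_tests.sh
git add run_tests.sh

if git commit -qm "should be rejected"; then
  result=accepted
else
  result=rejected
fi
echo "$result"
```

For a whole team this is normally enforced server-side (a pre-receive hook or a CI server) rather than per clone, since local hooks can be bypassed with --no-verify.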

Re:Isn't this kind of obvious? (0)

Anonymous Coward | more than 3 years ago | (#34392252)

+1

Linus is mastering the obvious here and I don't understand why this matters or how come this is news.

Anybody working with more than 2 branches follows the same "practice"... and in most cases without such obnoxiousness.

Re:Isn't this kind of obvious? (0)

Anonymous Coward | more than 3 years ago | (#34392440)

I'm sorry but I've never been tempted to branch from an unstable point, and I'd be horrified if anyone on my time tried to do so.

s/unstable/random/

It's not that people purposefully choose an unstable point; it's just that one day you decide you need a branch and so you do it. But you weren't aware that a few recent commits introduced some bugs, which you've now inherited.

Should have linked to the actual article (5, Informative)

Shandalar (1152907) | more than 3 years ago | (#34391162)

Here is the actual article that the submitter should have linked to. [lkml.org] It's Linus's post. Instead, the submitter linked to his or her advert site, which is a blog that has ads which hawk their own, non-git source control system, all of which you get to read before you are given the link to Linus's actual post.

Re:Should have linked to the actual article (0)

Anonymous Coward | more than 3 years ago | (#34391932)

Thank you. I see this stupid gd plasticscm site all the time on Slashdot.

Re:Should have linked to the actual article (0)

Anonymous Coward | more than 3 years ago | (#34394186)

They, the Plastic SCM people, regularly publish an article that looks like something worth reading but is actually an attempt at free publicity. This is not their first appearance on Slashdot.

Can we also use Slashdot to promote our products, or is it only allowed for them?

Heisenberg as applied to SW development (4, Insightful)

vlm (69642) | more than 3 years ago | (#34391220)

Some devs know where STABLE is located, some devs know what direction their new code is going, and a successful merge is where a dev violates the Heisenberg Uncertainty Principle and accomplishes both at the same time.

branch/merge is sux (1)

toxonix (1793960) | more than 3 years ago | (#34391256)

Branching and merging is a huge waste of time. I'm not sure why continuous integration is considered just a delaying of the problem. With a CI/streaming model merging happens continuously, not all at one time while trying to hit a moving target. And if CI is not allowed to break, no one has any problems downstream.

Re:branch/merge is sux (2, Interesting)

Mordstrom (1285984) | more than 3 years ago | (#34391914)

Do you really want to do all the validation testing every time you put back to the trunk? Including Installation testing? There are only so many scenarios you can catch with test-first development. The rest are usually discovered by the testers. That's why they still have jobs and why you can actually accomplish anything in a CI environment. There will always be the heroic tales of development teams that are on version 11000 on the trunk and have never busted a customer. Maybe you are him/her? >.>

For the rest of us simpletons, we prefer a validated release to start baselines from.

If you employ a fully featured VCS like ClearCase (or others) you can even run multiple simultaneous baselines releasing feature content while hardening your previous releases...and then merge between them. 0.0 Yes, good SCM teams let you do this and protects you from the worst of your merge nightmares.

Re:branch/merge is sux (1)

Mordstrom (1285984) | more than 3 years ago | (#34391966)

BAH! No edit button !

Yes, good SCM teams let you do this and protect you from the worst of your merge nightmares.

Re:branch/merge is sux (1)

toxonix (1793960) | more than 3 years ago | (#34397142)

There are different modes of release. CI makes sense where I work because we are on very short continuous release cycles. We don't have 6 months to engineer something that will be sent to Jupiter without the capability for updates. I'm suggesting streaming rather than using branches.

slashdot on advertisements (0)

Anonymous Coward | more than 3 years ago | (#34391290)

this is the second "story" that's really a blog/ad for PlasticSCM. coincidence or kickback?

Anonymous Coward (1, Funny)

Anonymous Coward | more than 3 years ago | (#34391720)

From TFA:

"Vaselines are key"

I totally agree.

Well, duh (0)

Anonymous Coward | more than 3 years ago | (#34392148)

I don't know why this blog entry quoting Linus's entry on the mailing list was considered worthy of an article. What Linus said is (or at least should be) obvious to anyone who has a bit of experience participating in any software project which has adopted a revision control system. In fact, surely everyone who has ever used a revision control system has already stumbled on the dreaded "shooting moving targets" problem on their first contributions to a project, and surely they have already managed to learn from their mistakes. You only need to revert a single commit to learn right there and then not to branch from unclean sources. So why exactly is this news?

The original email complained about failed bisect (2, Interesting)

Chirs (87576) | more than 3 years ago | (#34392512)

git allows you to bisect from known-good and known-bad kernels to try and find the source of the problem. The original complaint was that some of the intermediate changes don't build.

The problem here is not necessarily branching/merging, but that maintainers and developers do something along the lines of "commit bad change, notice problem, commit fix" in their own private branch. Then, rather than clean up their private branch, that whole history gets merged into the main kernel tree.

This has the advantage of showing more details of development, but has the downside that a bisect that hits the "bad change" commit won't build and will require some manual action to select a "nearby" commit that will build.

As I view it, it's less about rebase/merge and more about developers/maintainers being more diligent about keeping their trees clean before merging back to the mainline.
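The manual action the parent mentions is what git bisect skip automates: it marks the unbuildable commit as untestable and checks out a nearby candidate instead. A toy session (the four-commit history is invented):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
for i in 0 1 2 3; do                 # four commits: 0 is good, 3 is bad
  echo "$i" > state
  git add state
  git commit -qm "commit $i"
done
git tag good HEAD~3
git tag bad HEAD

git bisect start bad good >/dev/null
first=$(git rev-parse HEAD)          # the commit bisect asked us to test
# Suppose this commit doesn't build: mark it untestable and let
# bisect choose a neighbouring commit to try instead.
git bisect skip >/dev/null
second=$(git rev-parse HEAD)
test "$first" != "$second"           # bisect moved to a nearby commit
```

If every remaining candidate ends up skipped, bisect can only report a range of possible first-bad commits, which is exactly why clean, always-building histories matter.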
