You make a good point, but just to play devil's advocate: neither of those is true for Python, where an average is:
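Something like this minimal sketch (the function name is mine):

```python
def average(values):
    # In 3.x, / is true division, so this is correct for ints and floats alike.
    return sum(values) / len(values)
```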
(In 2.x there is an integer-division bug lurking there, but I'm talking about 3.x.)
You say that you need the backwards compatibility, but you never say why. Why does it make a difference that you are running old code in an old interpreter rather than the new one? Why force new code to be the same as that old code? It's the same thing, only you get the choice to advance.
It's not impossible to upgrade code, and it's always possible to run it in an old (but supported) interpreter. There is no merit to keeping the language at a standstill.
There is a huge difference there: the kernel isn't like Python, because you can't run two disparate kernel versions on the same system the way you can run two interpreters side by side to keep older code working.
Programming languages have to be able to make occasional breaking changes like this, because otherwise we settle for crippled languages full of problems that can't be fixed. You know what happens then? People create new languages to fill the gap, and then instead of accepting a few changes, you either stick with the old, bad version (just as you could have before) or switch to an entirely new language, which requires a complete rewrite rather than a modification (one which, in Python's case, is often as simple as pushing the code through 2to3).
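To illustrate, here is a sketch of the kind of mechanical rewrites 2to3 applies (the Python 2 originals are in the comments; the division case is my own addition, since 2to3 deliberately leaves `/` alone):

```python
total = 10

# Python 2:  print "total:", total
print("total:", total)          # print statement becomes a print() call

# Python 2:  for k, v in d.iteritems(): ...
d = {"a": 1, "b": 2}
for k, v in sorted(d.items()):  # dict.iteritems() becomes dict.items()
    print(k, v)

# One thing 2to3 does NOT touch: / between ints. In 2.x, total / 4 == 2;
# in 3.x it is 2.5, so this line needs a human review. Writing // keeps
# the old floor-division behaviour explicitly.
half = total // 4
```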
If you really can't afford to have Python change, keep using 2.x. Guess what: it's still (as evidenced by this post) being supported, and will be for some time yet.
1. Why bother? The language can support it natively, so why not just do that?
2. It should never reach that point; it sounds like your code is convoluted and poorly laid out. If the cost of a function call is actually affecting your program, then the code you are talking about is hugely performance-sensitive, and should probably be offloaded into an extension module.
"If value corrupts then absolute value corrupts absolutely."