Comments


Apple's Next Hit Could Be a Microsoft Surface Pro Clone

Coryoth Re:"Perfectly timed"? (247 comments)

Seems to me that Apple is playing catch-up in the phablet arena. Apple was late to the party and lost the toehold because of its tardiness.

No, no, you're looking at this all wrong. Apple stayed out of the phablet market until phablets were "cool/hip/trendy". The vast sales Samsung had were merely to unimportant people. Apple, on the other hand, entered the market exactly when phablets became cool, because, by definition, phablets became cool only once Apple had entered the market.

2 days ago

Science Has a Sexual Assault Problem

Coryoth Re:I FIND THIS HIGHLY... (460 comments)

Logic is a binary function. Something is in a logical set - or it is not. Being illogical is not a synonym for being mistaken. Degrees of precision are irrelevant for set inclusion. Fuzzy logic is not logic.

Fuzzy logic is logic. So are linear logic, intuitionistic logic, temporal logic, modal logic, and categorical logic. Just because you only learned Boolean logic doesn't mean there aren't well-developed, consistent logics beyond that. In practice, bivalent logics are the exception.
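
For a concrete sense of how a non-bivalent logic can still be perfectly well defined, here is a minimal sketch of Zadeh-style fuzzy connectives in Python (the function names and example degrees are mine, purely for illustration):

def f_not(a): return 1.0 - a
def f_and(a, b): return min(a, b)
def f_or(a, b): return max(a, b)
def f_implies(a, b): return max(1.0 - a, b)   # Kleene-Dienes implication

# Truth values are degrees in [0, 1] rather than members of {0, 1}.
tall = 0.7   # "x is tall" holds to degree 0.7
fast = 0.4   # "x is fast" holds to degree 0.4

print(f_and(tall, fast))        # 0.4
print(f_or(tall, f_not(tall)))  # 0.7 -- the law of the excluded middle need not hold

Restricted to the values 0.0 and 1.0, these connectives reduce exactly to the Boolean case, which is precisely the sense in which bivalent logic is a special case rather than the rule.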

about a month ago

KDE's UI To Bend Toward Simplicity

Coryoth Re:Some criticism (184 comments)

... a lot of people respond to this by saying the criticisms are stupid, that "if you know what you're doing" then you'll understand what's really going on, etc.

Yes; "if you're just willing to get your hands a little dirty and muck in and learn then you can bend the hugely complicated interface to your needs" they'll say; they'll complain that you're just not willing to learn things, and thus it is your fault. Such people will inevitably state that they are "power users" who need ultimate configurability and are (unlike you) willing to learn what they need to in order to get it.

They will inevitably deride GNOME3 for its complete lack of configurability. Of course they'll gloss over the fact that GNOME3 actually exposes pretty much everything via a JavaScript interface and makes adding/changing/extending functionality via JavaScript extensions trivial (GNOME3 even has a JavaScript console to let you do such things interactively). Apparently actually learning an API and coding completely custom interfaces from myriad building blocks is "too much work". They are "power users" who require a pointy-clicky interface to actually configure anything. Even dconf is "too complicated".

For those of us who learned to "customize our desktop" back in the days of FVWM via scriptable config files calling perl scripts etc. it seems clear that "power users" are really just posers who want to play at being "super-customised". Almost all the modern DEs do have complete customisation available and accessible; some of them just use a richer (scripting) interface to get such things done.

about 1 month ago

Interviews: Ask Bjarne Stroustrup About Programming and C++

Coryoth Re:Is the complexity of C++ a practical joke? (427 comments)

It's not the features that you stare at with no idea what they do that cause a problem. As you say, a quick look at the manual can sort those out (though it does add to the overall cognitive load). It's all the potentially subtle things that you don't even realise are features, and so never look up; you don't realise that, contrary to first inspection, the code is actually doing something subtly different from what you expect.

about 2 months ago

Math, Programming, and Language Learning

Coryoth Re:I disagree (241 comments)

Math is all about being precise, logical.. Communicating exactly one concept at a time. Natural languages do neither.

Except math is almost never actually done that way in practice. Euclid was wonderful, but almost all modern math does not work that strictly (and Euclid really should have been more careful with the parallel postulate -- there's "more than one thing at a time" involved there). Yes, proofs are careful and detailed, but so is, say, technical writing in English. Except for a few cases (check out metamath.org, or Homotopy Type Theory) almost no-one actually pedantically lays out all the formal steps introducing "only one concept at a time".

about 3 months ago

Math, Programming, and Language Learning

Coryoth Re: Your Results Will Vary (241 comments)

Not every programmer deals with these [mathematical] questions regularly (which is why I don't think math is necessary to be a programmer), but if you want to be a great programmer you had better bet you'll need it.

I don't think you need math even to be a great programmer. I do think a lot of great programmers are people who think in mathematical terms and thus benefit from mathematics. But I also believe you can be a great programmer and not be the sort of person who thinks in those terms. I expect the latter is harder, but then I'm a mathematician, so I'm more than ready to accept that I have some bias on this topic.

about 3 months ago

Math, Programming, and Language Learning

Coryoth Re:I disagree (241 comments)

Math IS sequencing. So is using recipes. That is how math works.

Math is a language. Just because you can frame things in that language doesn't mean that that language is necessary. Recipes are often in English. English is sequencing (words are a serial stream, after all). That doesn't mean English is necessary for programming (there seem to be many competent non-English-speaking programmers, as far as I can tell).

Disclaimer: I am a professional research mathematician; I do understand math just fine.

about 3 months ago

Math, Programming, and Language Learning

Coryoth Re: Your Results Will Vary (241 comments)

College education wastes countless hours teaching academic stuff that a great majority of programmers will not use on the job, while neglecting critical skills that could be immediately useful in a large .[sic]

Of course there was a time when college education was supposed to be education and not just vocational training.

about 3 months ago

Math, Programming, and Language Learning

Coryoth Re:Your Results Will Vary (241 comments)

I think part of the problem is that "programming" is itself so diverse.

The other part of the problem is that math is so diverse. There's calculus and engineering math with all kinds of techniques for solving this or that PDE; there's set theoretic foundations; there's graph theory and design theory and combinatorics and a slew of other discrete math topics; there's topology and metric spaces and various abstractions for continuity; there's linear algebra and all the finer points of matrices and matrix decompositions and tensors and on into Hilbert spaces and other infinite dimensional things; there's category theory and stacks and topos theory and other esoterica of abstraction. On and on, all very different, and I can't even pretend to have anything but cursory knowledge of most of them ... and I have a Ph.D. in math and work for a research institute trying to stay abreast of a decent range of topics. The people who actually study these topics in depth are all called "mathematicians", but if you're an algebraic geometer then sure, you're probably familiar with category theory and homological algebra; if you do design theory and graph theory then those seem like the most useful subjects available.

about 3 months ago

Math, Programming, and Language Learning

Coryoth Re: Your Results Will Vary (241 comments)

Calculus is perhaps not the best measure, however. Depending on where you go in the programming field, calculus is likely less useful than some decent depth of knowledge in graph theory, abstract algebra, category theory, or combinatorics and optimization. I imagine a number of people would chime in with statistics, but to do statistics right you need calculus (which is an example of one of the directions in which calculus can be useful for programming).

Of course the reality is that you don't need any of those subjects. Those subjects can, however, be very useful to you as a programmer. So yes, you can certainly be a programmer, and even a very successful and productive one, without any knowledge of calculus or graph theory, say. On the other hand, there may well be times when graph theory, or calculus, or statistics could prove very useful. What it comes down to is whether you are inclined to think that way -- if so, it can be a benefit; if not, it won't be the way you think about the problem anyway.

about 3 months ago

Ask Slashdot: Best Rapid Development Language To Learn Today?

Coryoth Re:I agree Python (466 comments)

I've gotten a lot of mileage out of Python for cleaning and pre-processing CSV and JSON datasets, using the obviously named "csv" and "json" modules. ... However, if you are doing very much manipulation of tabular data, I'd recommend learning a bit of SQL too.

You may want to look into pandas as a middle ground. It's great for sucking in tabular or CSV data and then applying statistical analysis tools to it. It has a native "dataframe" object, which is similar to a database table, with efficient merge, join, and groupby semantics. If you have a ton of data then a database and SQL is the right answer, but for a decent range of use cases in between, pandas is extremely powerful and effective.
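
As a rough sketch of the middle ground being described (the file names and column names here are invented for the example):

import pandas as pd

# Read tabular data straight into a DataFrame.
df = pd.read_csv("sales.csv")          # hypothetical columns: region, product, amount

# SQL-like operations as native DataFrame methods.
by_region = df.groupby("region")["amount"].sum()
top_sales = df.sort_values("amount", ascending=False).head(10)

# merge behaves much like a database join.
regions = pd.read_csv("regions.csv")   # hypothetical lookup table keyed by region
joined = df.merge(regions, on="region", how="left")

print(by_region)
print(top_sales)

For one-off cleaning and analysis this stays entirely in Python; only when the data outgrows memory does stepping up to a real database and SQL become necessary.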

about 4 months ago

Ask Slashdot: Best Rapid Development Language To Learn Today?

Coryoth Re:Programming language in 2 hours ? Yeah, right. (466 comments)

Because Ruby is my preference and I am more familiar with it, I can tell you that it is in continuous development, and bytecode-compiled versions are available (JRuby, which uses the JVM, and others). I do not know about Python in this respect because I haven't used it nearly as much.

Python has the default implementation, CPython, which compiles Python to bytecode and then interprets it; there's also Jython, which compiles to JVM bytecode, and IronPython, which compiles to Microsoft's CLR. There's also Cython (which requires extra annotations), which compiles to C and thence to machine code, and numba, which compiles to LLVM. Finally there's PyPy, a Python JIT compiler/interpreter written in a restricted subset of Python.
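
To give a flavour of the numba route mentioned above (the function itself is just a toy; the point is that a single decorator triggers LLVM compilation):

import numpy as np
from numba import jit

@jit(nopython=True)    # compiled to machine code via LLVM on first call
def dot(xs, ys):
    total = 0.0
    for i in range(len(xs)):
        total += xs[i] * ys[i]
    return total

a = np.arange(1_000_000, dtype=np.float64)
print(dot(a, a))       # runs at roughly C speed after the initial compilation

The same source runs unchanged under plain CPython if the decorator is removed, which is much of the appeal of this family of tools.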

about 4 months ago

Algorithm Distinguishes Memes From Ordinary Information

Coryoth Re:worthless top five phrases (38 comments)

So they mined the journal for words and phrases... meh, those aren't memes

They are memes in the sense that they are specifically finding words and phrases that are frequently inherited by papers (where "descendant" is determined by citation links), and rarely appear spontaneously (i.e. without appearing in any of the papers cited by a paper). An important feature is that their method used zero linguistic information: it didn't bother pruning out stopwords or, indeed, doing any preprocessing other than simple tokenisation on whitespace and punctuation. Managing to come out with nouns and complex phrases under such conditions is actually very impressive. You should try actually reading the paper.
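
A stripped-down sketch of the general idea (my own simplification for illustration, not the authors' exact scoring formula): score each term by how often it is inherited along citation links versus how often it appears out of nowhere.

def propagation_score(term, papers, cites):
    # papers: {paper_id: set of tokens}; cites: {paper_id: set of cited paper_ids}
    inherited = with_ancestry = spontaneous = without_ancestry = 0
    for pid, tokens in papers.items():
        in_cited = any(term in papers[c] for c in cites.get(pid, ()) if c in papers)
        if in_cited:
            with_ancestry += 1
            inherited += term in tokens
        else:
            without_ancestry += 1
            spontaneous += term in tokens
    sticking = inherited / with_ancestry if with_ancestry else 0.0
    sparking = spontaneous / without_ancestry if without_ancestry else 0.0
    # High scores: terms that stick to citation chains but rarely appear unprompted.
    return sticking / (sparking or 1e-9)

Note that nothing here knows anything about language: the score depends only on the citation graph and raw token occurrence, which is what makes the quality of the extracted phrases surprising.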

about 6 months ago

Algorithm Distinguishes Memes From Ordinary Information

Coryoth Re:now if only people can stop calling netmemes me (38 comments)

But the writers of TFA are still misusing the word

Actually no, they are not. By using citations to create a directed graph of papers, they are specifically looking for words or phrases that are highly likely to be inherited by descendant documents and much less likely to appear spontaneously (i.e. without being used in any of the cited documents). They really are interested in the heritability of words and phrases.

about 6 months ago

This Whole Bitcoin Thing Could Be Big, Says Bank of America

Coryoth Re:We should all like this Bitcoin *concept* (276 comments)

You are solely focused on bitcoin as an investment opportunity rather than its intrinsic utility.

Sure, but as far as intrinsic utility is concerned it doesn't matter when I get involved with bitcoin ... well, in fact it does: right now the price instability and general uncertainty mean it is far better to not get involved, to wait for all this nonsense to sort itself out, and to join the game once everything is settled, stable, and bitcoin is actually being used purely for its intrinsic utility. In other words, it's better for me to ignore it for a few more years at least.

about 10 months ago

Study Suggests Link Between Dread Pirate Roberts and Satoshi Nakamoto

Coryoth Re:A link between DPR and an early Bitcoiner (172 comments)

I think the more interesting part is the fact that some decent mathematicians (in this case Adi Shamir, among others) are setting about pulling the entire bitcoin transaction graph and doing some serious data-mining on it. The reported result sounds like a mildly interesting finding that happened to pop up in the first pass.

Given the advanced tools available these days for graph mining (largely developed for social network analysis among other things) I suspect some rather more interesting results may start coming out soon. What may seem hard to track on an individual basis may fall somewhat more easily to powerful analysis tools that get to make use of the big picture. I bet there's some interesting info on cliques and exchanges that could be teased out by serious researchers with some decent compute power at their disposal. Pseudonymity may be even weaker than you might think.
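
As a toy illustration of the kind of graph analysis involved, here is the standard multi-input heuristic (addresses spent together in one transaction likely share an owner) expressed as a graph problem; the data is invented, and the real work lies in doing this at full-chain scale:

import networkx as nx
from itertools import combinations

# Input-address sets of three made-up transactions.
transactions = [
    {"addrA", "addrB"},
    {"addrB", "addrC"},
    {"addrX"},
]

G = nx.Graph()
for inputs in transactions:
    G.add_nodes_from(inputs)
    for a, b in combinations(inputs, 2):
        G.add_edge(a, b)   # link addresses co-spent in the same transaction

# Connected components become candidate real-world "entities".
for entity in nx.connected_components(G):
    print(sorted(entity))

Here addrA, addrB, and addrC collapse into a single candidate entity: exactly the kind of de-pseudonymising step from which the more serious analyses begin.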

about a year ago

195K Bitcoin Transaction

Coryoth Re:a skeptic says "wow bitcoin is serious ". Hope (167 comments)

Try pricing in Zimbabwean dollars - you'll see the same problem.

Well, you won't anymore, because the Zimbabwean dollar was discontinued and the country now uses US dollars as its currency: price volatility made continued use of Zimbabwean dollars as a currency effectively impossible.

Now, Zimbabwe had inflation, not deflation, but the issue of volatility is the same: it makes things ultimately unworkable if it gets too high (even if it moves in a predictable way). When prices change significantly* by the minute and transactions take several minutes to complete, trouble sets in.

* "Significantly" here means, say, a double-digit percentage change in price every minute. Bitcoin is a long way from that currently, but is headed in that direction.

about a year ago

Stephen Wolfram Developing New Programming Language

Coryoth Re:yet another programming language (168 comments)

Being primarily a mathematician and not a computer scientist or engineer I've used Maple, Mathematica, Matlab, Magma and R. I've also programmed in Python, Perl, C, and Java and dabbled in things like Lisp and Haskell.

All the "math" programs on that list are terrible programming languages; they work great as interactive environments for doing (potentially symbolic) computation, but writing code in them? Ugh. If I actually have to write scientific computing code it's going to be in Python using numpy and sympy, or C if I need performance.

The different math programs all have their strengths and weaknesses: Matlab kicks the crap out of the others for anything numerical or linear algebra related, both for ease of expression and performance; R has far more capabilities statistically than any of the others -- data frames as a fundamental data type make that clear; Magma is incomparable for the breadth and power of its algebra, and none of the others come remotely close; Mathematica and Maple are ... well, sort of a poor jack of all trades that do most things but none of them very well.

about a year ago

Middle-Click Paste? Not For Long

Coryoth Re:Remove CTRL + C as well (729 comments)

Especially in an environment like Gnome 3 where the preferred method of working is full screen apps. Drag and drop to what?

I'm not really sure full screen is "the preferred method" in GNOME 3 (I use GNOME 3 and never full-screen apps). Anyway, presuming you want to drag and drop, you can drag to the Activities corner, which will take you to the Exposé-style overview, from which you can select any window and drop into it. I had never done this until just now, to see if it works; it does, and it is quite smooth (hover over a window for a second to have it raise to the front if you want to drop to a particular location within the window).

1 year,26 days

Middle-Click Paste? Not For Long

Coryoth Re:FUCK OFF (729 comments)

I try 'desktops' from time to time but they don't really give me much beyond managing windows. you know, the thing that fvwm does well enough and with 1/10 the memory and cpu.

A lot of what 'desktops' provide these days are things you don't see immediately: the toolkits, internationalization/localization, canvases, settings centralization and management, advanced font handling, notification plumbing, etc. that most GUI applications make use of (from one desktop or another). Presuming you're using apps other than xterm (and perhaps you are not), you are actually making use of most of this stuff; the part of the 'desktop' you're not using is simply the window manager and the panels, which are, ultimately, the tip of the iceberg.

1 year,26 days

Submissions


The Myth of the Mathematics Gender Gap

Coryoth Coryoth writes  |  more than 5 years ago

Coryoth (254751) writes "The widely held belief that there is disparity in the innate mathematical abilities of men and women has been steadily whittled down in recent years. The gender gap in basic mathematics skills closed some time ago, and recently the gap in high school mathematics has closed up as well, with as many girls as boys now taking high school calculus. As discussed in Newsweek, a new study published in The Proceedings of the National Academy of Sciences begins to lay to rest the remaining argument that it is at the highest levels of mathematics that the innate differences show. Certainly men dominate current academia, with 70% of mathematics Ph.D.s going to men; however, that figure is down from 95% in the 1950s. Indeed, while there remain gaps in achievement between the genders, the study shows that not only are these gaps closing, but the size of the gap varies across cultures and correlates with the general degree of gender inequality in the culture (as defined by WEF measures). In all, this amounts to strong evidence that the differences in outcomes in mathematics between the genders are driven by sociocultural factors rather than innate differences in ability."

Math Bounty for Elementary Teachers

Coryoth Coryoth writes  |  more than 6 years ago

Coryoth writes "The UK will be offering a bounty of up to £8000 for elementary school teachers willing to complete a Masters degree in mathematics. The cash rewards are part of a major new program to try and reform mathematics education in the UK, including the aim of having at least one specialist mathematics teacher in every elementary school in the country. The program wants to change the poor attitudes toward mathematics by providing informed mathematics instruction as early as possible, and by getting parents more involved."

Paying Primary School Teachers to Learn Maths

Coryoth Coryoth writes  |  more than 6 years ago

Coryoth writes "The UK government is planning to pay primary school teachers to get a Masters degree in mathematics. Teachers will be paid £1000 per year to attend week long university summer courses, with the goal of earning credits toward a Masters degree (which comes with a lump sum payment of £2500). This is part of a general overhaul of mathematics education in the UK based on a recent government report. The goal is to have at least one qualified specialist mathematics teacher in every primary school in the UK. Will this have a significant impact on how well UK students perform in mathematics, or is it a waste of money?"

Maths Experts for UK Primary Schools

Coryoth Coryoth writes  |  more than 6 years ago

Coryoth writes "BBC news is reporting that, following the reccomendations of a recent report, the UK government will be seeking 13,000 specialist math teachers; one for every primary school in the country. The aim is to change the culture of disinterest and distaste for mathematics that pervades many primary schools — when children are first learning about and developing attitudes toward the subject. Teachers will be paid to attend university summer courses in mathematics, with the goal of working towards a master's degree. The plan also includes efforts to get parents more involved, including joint mathematics lessons for parents and children. A similar approach starting in 1996 worked well for Finland. Could we see such a program spreading the the US, Australia and Canada?"

Specialist Math Teachers for UK Primary Schools

Coryoth Coryoth writes  |  more than 6 years ago

Coryoth writes "After a Government review of mathematics education in the UK, it has been decided that every primary school will have a specialist mathematics teacher within five years. Specialist mathematics teachers for primary schools will be paid more and will be expected to work towards masters degrees during their career. The goal is to interject a higher degree of math expertise and awareness much earlier in children's education. A similar approach was taken as part of Finland's LUMA programme in 1996, which has since yielded positive results. If the UK programme is successful, can we expect other countries to adopt this approach?"

Have Mathematics Exams Become Easier?

Coryoth Coryoth writes  |  more than 6 years ago

Coryoth writes "The BBC is reporting on a recent study in the UK that found that the difficulty of high school level math exams has declined. The study looked at mathematics from 1951 through to the present and found that, after remaining roughly constant through the 1970s and 1980s, the difficulty of high school math exams dropped precipitously starting in the early 1990s. A comparison of exams is provided in the appendix of the study. Are other countries, such as the US, noticing a similar decline in mthematics standards?"

Discouraging Students from Taking Math

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "Following on from a previous story about UK schools encouraging students to drop mathematics, an article in The Age accuses Australian schools of much the same. The claim is that Australian schools are actively discouraging students from taking upper level math courses to boost their academic results on school league tables. How widespread is this phenomenon? Are schools taking similar measures in the US and Canada?"

The Uneasy Relationship of Math and Crypto

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "Neal Koblitz, the creator of Elliptic Curve Cryptography, has an interesting article on the history of sophisticated mathematics in cryptography. Math has always played a key role in cryptography, but in the last few decades far more sophisticated pure mathematics has become increasingly vital to cryptography. This, in turn, has brought about an uneasy relationship between pure mathematicians and cryptographers. The result has been fruitful, but has also had a few unpleasant side effects; the article spends some time discussing and dissecting, and debunking the claims of "provably secure" cryptosystems."

The Uneasy Relationship Between Math and Crypto

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "From the September Notices of the AMS, The Uneasy Relationship Between Mathematics and Cryptography (PDF) by Neal Koblitz provides fascinating reading on the tension between pure mathematics and cryptography. While mathematics has always played some role in cryptography, since the 1970s far more sophisticated mathematics has played increasingly large role in cryptography research. The last couple of decades has seen a uneasy relationship develop between advanced mathematics and cryptography. Koblitz discusses this, along with his development of elliptic curve cryptography during that time. He also talks about some of the less welcome side effects, such as pure mathematicians contorting their research proposals to be "applicable" to "hot" fields of cryptography, and the attempt by cryptographers to co-opt the reliability of mathematical proof to give (rather weak, and often false) claims of "provably secure" cryptosystems."

Lenovo follows Dell with pre-installed Linux

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "The BBC is reporting that Lenovo is following Dell in offering pre-installed linux for its hardware. Lenovo will be offering Thnikpads with Novell's SUSE Linux Enterprise Desktop pre-installed. In contrast to Dell's largely home and general consumer focus for their linux foray, Lenovo is clearly targetting business users with the laptops models offered. As the BBC article points out, while desktop linux is largely the domain of "technology specialists" the moves by major hardware manufacturers such as Dell and Lenovo point toward a growing wider acceptance. Certainly dektop linux is increasingly gaining recognition amongst the broader computer using community."

Want to be a computer scientist? Forget maths

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "A new book is trying to claim that computer science is better off without maths. The author claims that early computing pioneers such as Von Neumann and Alan Turing imposed their pure mathematics background on the field, and that this has hobbled computer science ever since. He rejects the idea of algorithms as a good way to think about software. Can you really do computer science well without mathematics? And would you want to?"

EiffelStudio 6.0 Released

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "The first major version release since EiffelStudio went GPL in 2006, EiffelStudio 6.0, the Eiffel IDE, libraries and compiler suite, is now available for download with versions for Windows, MacOS X, and Linux. As the first major open source release, it cleans up some of the quirks such as "pick and drop", and modernizing the look and feel of the IDE. The result is a comprehensive development environment, with tools to cover the entire development process, from architectural design through to debugging and profiling. It also includes an updated version of the easy to use Vision2 cross platform GUI development libraries."

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "The BBC is reporting that students in the UK are being encouraged to drop math at the senior levels. It seems that schools are seeking to boost their standing on league tables by encouraging students not to take "hard" subjects like mathematics, in favour of easier subjects in which they are assured good grades. The result is Universities being forced to provide remedial math classes for science students who haven't done math for two years. The BBC provides a comparison between Chinese and UK university entrance tests — a comparison that makes the UK look woefully behind. Is the UK slipping behind in science education?"

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "The BBC is reporting on accusations that UK students are being encouraged to drop mathematics and other "hard" courses so that schools can look better on league tables. Instead students are encouraged to take what are percieved to be easier courses in which they can earn better grades, and hence raise the school's standings. In response UK Universities are having to provide remedial math courses for the incoming science students who have not taken upper level math courses at high school."

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "The n-Category Café has a discussion on Topos Theory in physics, prompted by a recent New Scientist article. Both articles discuss the current attempts to provide a better framework for quantum mechanics, and in particular quantum logic, through Topos theory. The idea is to find a topos, and associated logic, that better describes quantum logic. Any such higher order logic in a topos will, however, be formed as an Intuitionistic Type Theory. So, the real question is, what does Quantum Type Theory look like?"

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "While such unscientific polls should be taken with a grain of salt, the BBC is reporting that in a surprise result Serenity finished first, ahead of Star Wars for the title of best sci-fi movie in an SFX magazine poll. If nothing else it does show that at the moment Firefly fans are more ardent and more vocal in their support for their favourite Science Fiction universe."

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "Design by Contract, writing pre- and post-conditions on functions, seemed like straightforward common sense to me. Such conditions, in the form of executable code, not only provide more exacting API documentation, but also provide a test harness. Having easy to write unit tests that are automatically integrated into the inheritance hierarchy in OO languages "just made sense". However, despite being available (to varying degrees of completeness) for many languages other than Eiffel, including Java, C++, Perl, Python, Ruby, Ada, and even Haskell and Ocaml, the concept has never gained significant traction, particularly in comparison to unit testing frameworks (which DbC complements nicely), and hype like "Extreme Programming". So why did Design by Contract fail to take off?"

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "Following up a previous story, it seems that the Kentucky effort to provide increased pay to teachers with qualifications in mathematics, physics, and chemistry has been gutted. Teachers objected to differential pay, and that portion of the bill was removed. At the same time California has just put forward a similar measure, with differential pay for teachers qualified in mathematics and science. Shockingly 40% of mathematics teachers in California are not fully qualified in the subject — a higher percentage of unqualified teachers than any other subject. Is the Californian effort any more likely to succeed, or is it destined to be similarly gutted? Is there a solution to the woeful lack of qualified mathematics teachers that the Teachers' Union will find acceptable?"

Coryoth Coryoth writes  |  more than 7 years ago

Coryoth writes "While California is suffering from critical shortage of mathematics and science teachers, Kentucky is considering two bills that would give explicit financial incentives to math and science students and teachers. The first bill would provide cash incentives to schools to run AP math and science classes, and cash scholarships to students who did well on AP math and science exams. The second bill provides salary bumps for any teachers with degrees in math or science, or who score well in teacher-certification tests in math, chemistry and physics. Is such differentiated pay the right way to attract science graduates who can make much more in industry, or is it simply going to breed discontent among teachers?"

Journals


If We Taught English the Way We Teach Mathematics

Coryoth Coryoth writes  |  more than 7 years ago

Imagine that your only contact with "English" as a subject was through classes in school. Suppose that those classes, from elementary school right through to high school, amounted to nothing more than reading dictionaries, getting drilled in spelling and formal grammatical construction, and memorizing vast vocabulary lists -- you never read a novel, nor a poem; never had contact with anything beyond the pedantic complexity of English spelling and formal grammar, and precise definitions for an endless array of words. You would probably hate the subject.

You might come to wonder what the point of learning English was. In response perhaps the teachers and education system might decide that, to help make English relevant to students, they need to introduce more "Applied English". This means teaching English students with examples from "real life" (for varying degrees of "real") where English skills are important, like how to read a contract and locate the superfluous comma. Maybe (in an effort by the teachers to be "trendy") you'll get lessons on formal diary composition so you can better update your MySpace page. All of that, of course, will be taught using a formulaic cookbook approach based on templates, with no effort to consider underlying principles or the larger picture. Locating the superfluous comma will be a matter of systematically identifying subjects, objects, and verbs and grouping them into clauses until the extra comma has been caught. Your diary will be constructed from a formal template that leaves a few blanks for you to fill in. Perhaps you might also get a few tasks that are just the same old drills, just with a few mentions of "real world" things to make them "Applied": "Here is an advertisement for carpets. How many adjectives does it contain?".

In such a world it wouldn't be hard to imagine lots of people developing "English anxiety", and most people having a general underlying dislike for the subject. Many people would simply avoid reading books because of the bad associations with English class in school. With so few people taking a real interest in the subject, teachers who were truly passionate about English would become few and far between. The result, naturally, would be teachers who had little real interest in the subject simply following the drilling procedures outlined in the textbooks they were provided; the cycle would repeat again, with students even worse off this time.

And yet this is very much how mathematics tends to be taught in our schools today. There is a great focus on the minutiae of the subject, and almost no effort to help students grasp the bigger picture of why the subject might be interesting, and what it can say about us and about the world. Mathematics has become hopelessly detail oriented. There is more to mathematics than mindlessly learning formulas and recipes for solving problems. And just like our imaginary example, the response to students' lack of interest in mathematics has only served to make the problem worse. The "applications" and examples of using the mathematics in the "real world" are hopelessly contrived at best, and completely artificial at worst, and still keep a laser-like focus on formulas and memorizing methods without ever understanding why they work.

Of course the opposite situation, with no focus on details, can be just as bad. Indeed, that is where English instruction finds itself today, with students never learning the spelling, formal grammar, and vocabulary needed to decently express the grand big picture ideas they are encouraged to explore. What is needed is a middle ground. Certainly being fluent in the basic skills of mathematics is necessary, just as having a solid grounding in spelling and grammar is necessary. What is lacking in mathematics instruction is any discussion of what mathematics is, and why mathematics works as well as it does.

The discovery and development of mathematics is one of the great achievements of mankind -- it provides the foundation upon which almost all of modern science and technology rests. This is because mathematics, as the art of abstraction, provides us with the ability to make simple statements that have incredibly broad application. For example, the reason that numbers and arithmetic are so unreasonably effective is that they describe a single simple property that every possible collection possesses, and a set of rules that are unchanged regardless of the specific nature of the collections involved. No matter what collection you consider, abstract or concrete, it has a number that describes its size; no matter what type of objects your collections are made up of, the results of arithmetic operations will always describe the resulting collection accurately. Thus the simple statement that 2 + 3 = 5 is a statement that describes the behaviour of every possible collection of 2 objects, and every possible collection of 3 objects. Algebra can be viewed the same way, except that instead of abstracting over collections we are abstracting over numbers: elementary algebra is the combination of objects that represent any possible number (as numbers represent any possible collection with the given quantity), and the set of arithmetic rules for which all numbers behave identically. Numbers let us speak about all possible collections, and algebra lets us speak about all possible numbers. Each layer of abstraction allows us to use an ever broader brush with which to paint our vision of the world.

If you climb up those layers of abstraction you can use that broad brush to paint beautiful pictures -- the vast scope of the language that mathematics gives you allows simple statements to draw together and connect the unruly diversity of the world. A good mathematical theorem can be like a succinct poem; but only if the reader has the context to see the rich connections that the theorem lays bare. Without the opportunity to step back and see the forest for the trees, to see the broad landscape that the abstract nature of mathematics allows us to address, it is rare for people to see the elegance of mathematical statements. By failing to address how mathematics works, how it speaks broadly about the world, and what it means, we hobble children's ability to appreciate mathematics -- how can they appreciate something when they never learn what it is? The formulas and manipulations children learn, while a necessary part of mathematics, are ultimately just the mechanics of the subject; equally important is why those mechanics are valuable, not just in terms of what they can do, but in terms of why they can do so much.

So why is it that this broader view is so rarely taught? There are, of course, many reasons, and it is not worth trying to discuss them all here. Instead I will point to one reason, for which clear remedies exist, and on which immediate action could be taken. That reason is, simply, that far too many people who teach mathematics are unaware of this broader view themselves. It is unfortunately the case that it is only at the upper levels of education, such as university, that any broader conception of mathematics becomes apparent. Since it is rare for people going into elementary school teaching to take any university level mathematics, the vast majority of elementary teachers -- the math teachers for all our children in their early years -- have little real appreciation of mathematics. They teach the specific trees outlined in textbooks, with no real idea of the forest. A simple but effective measure that could be taken is to provide stronger incentives and encouragement for prospective elementary school teachers to take extra math; whether it takes the form of courses, or math clubs, doesn't matter: the aim is to get teachers more involved and better exposed to mathematics in general so that they can become familiar with the richer world beyond the specific formulas and algorithms. This exact approach was tried in Finland as part of their LUMA project starting in 1992. As a result, the number of teachers graduating with higher-level mathematics training had increased dramatically by 1999. And the results are clear: Finland showed continued improvement in mathematics and science in the 2003 PISA survey of the reading, math, and science skills of 15-year-olds in OECD countries, finishing first overall (and second, just behind Hong Kong, in the mathematics section). Finland has continued to do extremely well in other more recent (though less major) studies.

Whether you view mathematics as an important subject or not, it is hard to deny that, currently, it is being taught poorly in many countries around the world. With such scope for improvement, and clear examples such as Finland showing the way, isn't it time that we took at least some of the obvious steps toward improving the quality of mathematics education?


Better Programming in Java

Coryoth Coryoth writes  |  more than 8 years ago

Sometimes ideas don't get the attention that they deserve. The Java Modelling Language (JML) is just such an idea. JML is something every Java programmer should know about. That does not mean every project in Java should make use of JML, only that developers should be aware of it so they can use it wherever it does make sense. The fact is that JML provides so many benefits for so little extra work that it deserves far more use than it gets. If you program in Java it could make your life a lot easier.

What is JML?

In a sense JML is just a way to document your code better. JML provides simple intuitive syntax to provide concise descriptions of behaviour in a Java module. JML uses annotations to document all the mundane things you would normally document. For example you might want to note which fields are allowed to ever be null, or whether a particular method requires one of its parameters to be greater than zero, or if a method is side-effect free. What JML really offers, however, is the tools to make use of these extra comments to save you time and effort later. JML simply asks you to do what you would normally do, and in return offers improved productivity when it comes to documenting, testing, and debugging your code.

So what exactly does JML look like? Here's a simple toy example that, while contrived, manages to demonstrate a lot of basic JML and will be useful later for demonstrating what tools that use JML can do.

public class Example {

    public int modifiable = 0;
    private /*@ spec_public @*/ int additional_value = 5;
    private /*@ spec_public non_null @*/ String current_substring;

    //@ invariant additional_value >= 5;
    //@ invariant current_substring.length() == modifiable;

    //@ requires ((String) o).length() > modifiable;
    //@ assignable current_substring;
    public void castExample(/*@ non_null @*/ Object o) {
        additional_value = o.hashCode() % 5;
        current_substring = ((String) o).substring(0, modifiable);
    }

    //@ requires x >= -5;
    //@ ensures \result > 0;
    public /*@ pure @*/ int pureFunction(int x) {
        return x + additional_value;
    }

    //@ assignable modifiable;
    //@ signals (ExampleException e) additional_value <= 4;
    public void unreachable() throws ExampleException {
        if (additional_value > 4 && modifiable > 0) {
            modifiable++;
        }
        else if (modifiable == 0) {
            throw new ExampleException();
        }
        else {
            //@ unreachable;
        }
    }
}

Most of the JML annotations should be clear: we are simply documenting constraints on inputs (requires clauses) and outputs (ensures clauses), noting when fields or parameters cannot be null, when code is (we presume) unreachable, and what invariant properties we expect the objects to have. A little less immediately obvious are the assignable and signals clauses. An assignable clause declares what is allowed to be written/assigned to from within the method, and a signals clause declares conditions that must be met in the case of an exception being thrown. Finally, spec_public simply declares that these private fields have public visibility to the specification. You may note a few places where the specification and the code don't agree - we'll come to those shortly.

This is, of course, only a small sample of what JML offers. JML provides a lot of useful syntactic sugar to make stating your intentions as easy and natural as possible. For a proper introduction to JML there are some presentations on basic JML and more advanced JML. Full documentation can be found in the JML Reference Manual.

What do I get out of it?

JML provides an easy to use language with which you can concisely say how you intend your code to behave. More importantly it provides tools to make full use of that. For the small amount of extra effort of documenting your intentions with JML you get a lot of easy gains.

The JML tools

The JML distribution comes with a number of tools that use JML annotations. The four most useful are jmldoc, jmlc, jmlrac, and jmlunit. The first, jmldoc, performs the same function as Javadoc, producing almost identical API documentation. The advantage of jmldoc is that it understands JML annotations and includes suitable extra information in the API documents based on the annotations. As long as you document requirements and guarantees for modules in JML, you can be sure that the API documentation will always include those constraints. Furthermore, because JML annotations become integral to testing and debugging, it is far more likely that the annotations are kept up to date with the code, meaning that documentation is also more likely to be up to date.

The tools jmlc and jmlrac provide a JML aware compiler and runtime environment. The advantage of compiling and running code with such tools is that JML annotations are converted into assertions which are checked at runtime, making it easier and faster to discover and locate bugs. For the small cost of writing a few annotations you can significantly speed up the testing and debugging cycle.

You can get similar results by liberally sprinkling assertions through your code, but JML offers several advantages over such a scheme:

  1. JML provides a lot of syntactic sugar like universal and existential quantifiers (\forall and \exists), and logical connectives like "implies" (==>) and "if and only if" (<==>).
  2. JML elegantly handles inheritance. When overriding a method you would have to rewrite all your assertions, while JML annotations are automatically inherited. If you want to add to the annotations inherited from an overridden method, JML provides the also keyword.
  3. As JML annotations are comments, and are only converted into runtime assertions when you use the jmlc compiler, turning off runtime checking is as easy as compiling with a standard Java compiler. Using assertions would have you jumping through hoops to arrange anything similar.
  4. JML annotations can be automatically included in your Javadoc documentation. If you use assertions you'll have to repeat the information elsewhere to get it included in the documentation.

Of course having assertions based on intended behaviour is no substitute for a decent unit testing framework. JML has an answer for this too however. The jmlunit tool can read a Java module with JML annotations and automatically generate a JUnit test class based on the annotations. For example running jmlunit over our example above creates two new files Example_JML_Test.java and Example_JML_TestData.java. The first of those files is a JUnit test suite, and the second is used to generate test data. JML attempts to come up with a reasonable test data generation strategy, but leaves space for you to enter your own set of test data instead should you prefer.

Extended Static Checking

Extended static checking is where JML annotations begin to truly come into their own. ESC/Java2 is an extended static checking tool for Java that uses JML annotations to catch errors before you even compile, let alone run, the code. You can think of it as type checking on steroids - using the information in the annotations to warn you about a wide variety of potential errors. It runs about as fast as compilation, so there's no waiting for results.

Consider what happens when we run the example above through ESC/Java2. We get a series of warnings (some not revealed until others have been corrected):

  1. We are warned that current_substring is uninitialized and could be null
  2. We are warned that additional_value is modified in castExample even though the assignable clause does not permit it
  3. We are warned that the cast made in castExample may be invalid
  4. We are warned that the postcondition for pureFunction may be violated (it's possible that x = -5 while the invariant only ensures that additional_value is at least 5, thus 0 could be returned).
  5. We are warned that an exception may be thrown in unreachable without the conditions being met
  6. We are warned that the //@ unreachable portion of code is, in fact, reachable
  7. We are warned that the invariant current_substring.length() == modifiable can be violated.

Most of the errors listed above are either errors in code, or in our expectations of what the code will do. Some of these errors can be remedied by strengthening the constraints. For example we can add //@ requires o instanceof String to castExample to fix the warning, but at the cost that any code calling that method must meet that precondition (though the API documentation will state the precondition). Other errors can only be corrected in the code itself.

While the errors are necessarily simple in this toy example, hopefully this gives you the flavour of what ESC/Java2 can do for you - it has very powerful algorithms and scales to remarkably complex code, easily finding errors that would be hard to spot by inspection. For more examples of what ESC/Java2 can do for you (and it is well worth reading) try this introduction to ESC/Java2. ESC/Java2 offers a robust and powerful tool for finding subtle errors in your code earlier when they are easier to fix.

Conclusions

JML is simple to use: it has an expressive and intuitive syntax, and you can write the annotations at the same time as the code. JML is also flexible: you can use as much or as little in the way of annotations as you need. Perhaps you simply want to add some pre- and post-conditions, or just keep track of which methods can access or alter certain fields; perhaps you want to use the full power of the theorem provers in ESC/Java2. Most of all, JML makes many aspects of development, from debugging and testing to documenting, easier and more efficient, all for very little extra effort. If you program in Java for a living then JML can almost certainly make your code better, and your life easier.


A Case for Formal Specification

Coryoth Coryoth writes  |  more than 9 years ago

Formal Specification helps build more robust, more maintainable software with fewer bugs and defects. It has a long history, but it is still a developing field. While it may not be suitable for all software projects, a case can be made that there are many projects not currently using formal specification that stand to benefit from it. As the methods and tools for formal specification develop it is increasingly becoming something that developers and software engineers should learn to use to their advantage.

What is Formal Specification?

Formal specification is simply a matter of being more explicit and specific in defining the requirements of software. At the simplest level this can take the form of Design by Contract, where functions and procedures specify pre- and post-conditions and loops and objects include a set of invariant properties. In the more rigorous case formal specification involves building an explicit mathematical definition of the requirements of the software. Using such a definition one can prove the correctness of the system, or simply prove theorems about properties of the system. An implementation can also be checked against such a formalised specification, verifying that the implemented code does indeed do precisely what the requirements claim. At the most rigorous level the initial formal requirement specification can be expanded, through (mathematically rigorous) refinement, to ever more specific and detailed specifications resulting eventually in executable code.
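
As a small illustration of the Design by Contract end of that spectrum, here is a hand-rolled sketch in Python (the contract decorator and the names are mine; languages like Eiffel, and SPARK below, build this support in directly):

def contract(pre=None, post=None):
    # Wrap a function so its pre- and post-conditions are checked on every call.
    def wrap(fn):
        def checked(*args, **kwargs):
            assert pre is None or pre(*args, **kwargs), "precondition violated"
            result = fn(*args, **kwargs)
            assert post is None or post(result), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def max_abs(xs):
    return max(abs(x) for x in xs)

print(max_abs([-3, 2]))   # prints 3
# max_abs([]) would fail the precondition check at the call site.

Runtime checking of this kind is only the weakest level described above; the point of real specification languages is that the same annotations also feed static analysis and proof tools.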

All of these different levels allow a significantly greater degree of analysis of the software, be it improved static and runtime checking with contracts, to more complex data flow analysis and proof with more complete specifications. In the same way that static types allow more rigorous checking at compile time, catching a lot of simple errors, contracts and specifications allow even more analysis and checking, catching even more errors at the early stages of development when they are most easily and efficiently fixed.

A Simple Example

For good examples of formal specification at work I would suggest you try this Z case study or the case study for CASL in the CASL user manual. For those who want something more immediate I have a warning: To be short enough to present easily in this space the specifications are necessarily very simple and should be taken as a sample of the flavour of formal specifications rather than a serious example.

The specification for a simple stack, in SPARK, a version of Ada that adds formal specification semantics, would look something like the following:


package Stack
--# own State: Stack_Type;
--# initializes State;
is
        --# type Stack_Type is abstract

        --# function Not_Full(S: Stack_Type) return Boolean;
        --# function Not_Empty(S: Stack_Type) return Boolean;
        --# function Append(S: Stack_Type; X: Integer) return Stack_Type;

        procedure Push(X: in Integer);
        --# global in out State;
        --# derives State from State, X;
        --# pre Not_Full(State);
        --# post State = Append(State~, X);

        procedure Pop(X: out Integer);
        --# global in out State;
        --# derives State, X from State;
        --# pre Not_Empty(State);
        --# post State~ = Append(State, X);
end Stack;

The package body, containing the implementation, can then follow.

The own annotation simply declares the variable State to be a package global variable, and initializes means that the variable State must be initialized internally in this package. We then declare an abstract type (to be defined in the implementation) and some simple functions. The in and out keywords in the procedure declarations tag the parameters: in means that the parameter's current value will be read in the procedure, and out means the parameter will be written to in the procedure. The global keyword declares that the package global variable State will be used in the procedure (and both read from and written to). The derives keyword provides explicit declarations of which inputs will be used in determining the values of variables that are output or written to. The pre- and post-conditions should be largely self explanatory.

As you can see, mostly all we are doing is making explicit exactly how we intend the procedures to operate. This specificity provides automated verification tools with the information necessary to properly analyse and validate implemented code: prior to a compile step you can run a verification and flow analysis tool that catches many subtle errors.

A similar object, specified in an algebraic specification language like CASL might look something like this:


spec Stack[sort Elem] =
        sort
                stack
        ops
                empty: stack
                push: stack * Elem -> stack
                pop: stack ->? stack
                top: stack ->? Elem
        axioms
                not def pop(empty)
                not def top(empty)
                forall s : stack; e : Elem
                          pop(push(s,e)) = s
                          top(push(s,e)) = e
end

CASL offers syntax that can compact this considerably, but this longer version makes the workings of the specification more clear.

We declare a specification for a stack of generic elements, of sort stack, with operations empty (which is, in a sense, the constructor, creating an empty stack); push, which maps a stack and an element to a "new" stack; pop, a partial function (denoted by the question mark) which maps a stack to a "new" stack; and top, another partial function, which maps a stack to an element. We then define axioms: first, as pop and top are partial functions, we say they are not defined on the empty stack; second, we declare that pop and top should behave as (partial) inverses of push for all stacks and elements.

This is sufficient information to completely specify stacks as a universal algebra and brings a great deal of powerful mathematical machinery to bear on the problem, much of which has been coded into a set of analysis tools for CASL.

While these examples are very simple, they should give you the flavour of how formal specification can work. Both SPARK and CASL support structured specifications, allowing you to build up libraries of specifications, so the approach scales to very large problems; a small sketch of how such structuring looks in SPARK follows.
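
In SPARK, for example, structuring is done with the ordinary Ada package system plus an inherit annotation. A package building on our stack might begin like the following sketch; the Calculator package and its procedure are invented purely for illustration:


--# inherit Stack;
package Calculator
is
        procedure Accumulate(X: in Integer);
        --# global in out Stack.State;
        --# derives Stack.State from Stack.State, X;
end Calculator;

The inherit annotation makes the dependency explicit, so the flow analysis of Calculator can be checked against Stack's specification without ever looking at Stack's implementation.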

Why Use Formal Specification?

Why use static types? Why do any compile-time checking? Why produce design documents for your software? Not every project is worth writing design documents for, and some projects are better off being developed quickly using a dynamically typed language and runtime checking. Not every project really needs formal specification, but a great many software projects could benefit from some level of formal specification - a great many more than make use of it at present.

I would suggest that it is probably unimportant whether your paint program, music player, word processor or desktop weather applet uses formal specification. What about your spreadsheet application? Bothering with formal specification for the GUI might be a waste of time; bothering to do some specification for the various mathematical routines, ensuring their correctness, could well be worth the extra trouble. Formal specification doesn't need to be used for a whole project, only for those parts of it that are sensitive to error. Likewise, any network service could easily benefit from formal specification of the network-facing portions of the code to significantly reduce the possibility of exploits: it is far easier to audit and verify code that has been properly specified. Security software, and implementations of cryptographic protocols, are far safer if formally specified: with cryptography the protocols themselves are often rigorously checked, and many exploits arise where the implementation fails to correctly follow the protocol. Finally, mission-critical business software, where downtime can cost millions of dollars, could most assuredly benefit from the extra assurances that formal specification and verification provide.

Nonetheless, barring a few industries, formal specification has seen little use over the past 25 years. Developers and software engineers offer many excuses and complaints as to why formal specification isn't suitable, isn't required, or is too hard. Many of these complaints are founded on a poor understanding of how formal specification works and how it can be used. Let's consider some of the common complaints:

But Formal Specification is Too Much Work...

Formal specification can be as much work as you choose to make it. You can do as little as adding contracts to critical functions or classes, or you can write the entire project from the top down by progressive refinement of (and verification against) a specification for which you have created formal proofs of all the important properties. There is a sliding scale from a dynamically typed script with no documentation or comments all the way up to a completely specified and proven system: you choose the level of specificity and verification, and you can specify as much or as little of the system as you need.

Formal specification isn't necessarily significantly more work than what you do now. Writing contracts is only a little more work than static typing: you are simply declaring formally what you intend the function to do, something you ought to have a clear idea of before you write the function anyway. Writing a basic specification is only a little more work than writing a design document: you are simply formalising the diagrams and comments into explicit statements. For a little extra work in the initial stages you gain considerably in debugging and testing. Extended static checking based on contracts can catch many simple errors that would otherwise go unnoticed, and debugging is significantly accelerated by the added information contracts supply about runtime errors. If you've gone to the trouble of writing a formal specification, you can statically verify properties of the system before it is even implemented. Once implemented, you can validate the implementation against the specification. And when testing, you can use the specification to determine the best coverage with the least testing.
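
To give a feel for how little extra work this is, here is a sketch of a contracted integer division routine in the same SPARK annotation style as the stack example (the procedure itself is a standard textbook illustration, not code from any particular library):


procedure Divide(M, N: in Integer; Q, R: out Integer);
--# derives Q, R from M, N;
--# pre  (M >= 0) and (N > 0);
--# post (M = Q * N + R) and (R >= 0) and (R < N);

Four lines of annotation completely pin down what the routine is supposed to compute; a static checker can then verify both that every caller satisfies the pre-condition and that the implementation establishes the post-condition.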

Formal specification does work in practice, and the gains in efficiency in testing and debugging can often outweigh the extra overhead during design. Design by Contract, for instance, is often cited as saving time in the development process, and there are many real-world success stories where systems were developed faster by using contracts: a 20-person team completed in 5 months what took a team of 100 people using standard C++ almost a year. For projects where robustness is critical, full formal specification has been used successfully. Companies like Praxis High Integrity Systems use formal methods successfully in the rail, aerospace, nuclear, finance and telecommunications industries (among others), and have sold their formal methods tool-set to a variety of large companies. Reading through some of their recent work and case studies makes it clear that formal specification and verification can prove to be more efficient and faster to use.

But Formal Specification is Not Practical...

On the contrary, formal specification is used all the time in many industries, and various companies, such as Praxis High Integrity Systems, B-Core, Eiffel Software, Kind Software, and Escher Technologies, provide formal specification tools as a major part of their business.

Formal specification has proved to be practical in the real world. What it has lacked are mature tools to make development with formal specification faster and easier (try writing code without a good compiler; try doing formal specification without good static checking tools), developer awareness, and developers skilled in its languages and techniques. It doesn't take much to learn - learning a specification language is not much harder than learning a programming language. Just as programming appears a difficult and daunting task to a person who doesn't know any programming languages, specification looks difficult and impractical to people who don't know specification languages. Take the time to learn, and your perspective will change.

But the Project Requirements are Always Being Changed...

It is true that many projects face ever changing requirements. The question is not whether this renders formal specification useless (it doesn't) but how formal specification compares to informally specified software in such a situation.

First of all it should be noted that engaging in formal specification can help reduce the problem to begin with. By requiring you to rigorously specify at least some portions of the software, formal specification can assist in finding ambiguities, cases that were not covered, and other problems much earlier in the development cycle when changes are easiest to make. Those parts of the system, such as GUIs, that are often the target of more whimsical design or requirements changes are precisely the parts that benefit least from, and are least likely to use, formal specification. Changes to design or requirements for those parts of the system, then, are not likely to be more detrimental than for informally specified projects.

Secondly, formal specification is quite capable of dealing with change, and even offers benefits in the face of changing requirements. A change in requirements results in a change in specification, but because the specification is formally written the impact of the changes can be analysed: you can check that new requirements remain consistent with the rest of the system, and you can determine all the parts of the system that will be impacted by the change, avoiding bugs the changes would otherwise introduce. Having a formal specification for the system when a change in requirements occurs makes it easier to see how the change will affect the system and easier to make the changes consistently and completely. Formal specification can make changes in requirements easier to deal with, not harder.

But I Already do Unit Testing...

Unit testing is fantastic, and there's no reason to stop just because you can write proofs for some aspects of your code. Unit testing is not a replacement for formal specification, however. Testing only verifies the component for a very limited and specific range of cases; specifically, it covers the cases the developer thought of. A great many bugs, however, arise from situations the developer didn't consider, and hence wouldn't have included in the tests. You can improve the situation with testing tools like fuzzers, but you're still only checking a sampling of the whole space. Formal specification, on the other hand, allows construction of formal proofs that cover all possible cases. I would much rather specify and prove Pythagoras' theorem than simply assume it is true after testing it on a few random right-angled triangles. Likewise, while testing is valuable (and easy), when assurance matters it is no replacement for proof based on a specification.
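
To make the contrast concrete, here is a minimal sketch of a unit test against the stack specified earlier (the test procedure and the value 42 are invented for illustration). The test samples a single point of an enormous input space, whereas the post-conditions in the specification must be discharged by the prover for every reachable state:


with Stack;
procedure Test_Push_Pop is
        X : Integer;
begin
        Stack.Push(42);
        Stack.Pop(X);
        -- Checks one sample point; a proof of the post-conditions covers them all.
        pragma Assert(X = 42);
end Test_Push_Pop;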

Does this mean you should give up unit testing? In no way, shape, or form. Just as formal proofs in mathematics are much harder than trying a few cases, formal proofs from specifications require significant work - you most likely can't prove everything. The best method, then, is to prove those properties that are critical and continue testing the system much as you normally would. Hopefully you'll see far fewer errors to debug, thanks to the extended static checking, of course.

But Formal Specification Still Can't Guarantee Perfect Software...

Correct: there are no guarantees. Complete proofs are often too complicated to perform, so theorem proving for specific properties is usually undertaken instead. Static checking can be enhanced significantly with suitable specifications, but not all errors can be caught, and your software is still only going to be as stable and secure as the system it is running on. So if you can't be perfect, why try at all? Because significant gains can be made, and the costs may be smaller than you think. Even if the whole system can't be proved, managing to prove key properties may well be all the assurance of correctness that is required for the application at hand. Even if you can't catch all the errors statically, reducing the number and severity of runtime errors brings significant efficiency gains in the testing phase. Furthermore, the cost of errors found after release can be sufficiently high that any reduction is worthwhile. Finally, if you can't trust your compiler and operating system then everything is compromised anyway: at some level you have to trust something - we all currently trust a great deal of software that we expect to have flaws - and reducing the sources of error to trusted components is still a significant gain.

But Formal Specification Is Only Viable for Trivial Examples...

Not even close to the truth. Formal specification and formal methods have been used for large projects involving hundreds of thousands of lines of code. Just look at the examples provided above: these were not small projects, and they were not trivial toy examples; they were large-scale, real-world software systems. Examples of formal specification tend to be small, simplistic cases because they are meant to be just that: easily comprehensible examples. In the same way that programming languages often use "Hello World" and quicksort as examples, formal specification languages use equally simple but demonstrative examples. Don't take this to mean that the languages and methods don't scale.

Conclusions

Formal specification isn't a silver bullet, and it isn't the right choice for every project. On the other hand, you can use as much specification and verification as is suitable for your project: you can apply formal specification only to the portions that are critical, and develop the rest normally. More importantly, the techniques, languages and tools for formal specification continue to improve. The more powerful the methods and the tools, the more projects for which formal specification becomes a viable option. In an increasingly network-oriented computing world, where security and software assurance matter more and more, the place for formal specification is growing. If you are a software engineer it makes sense to be aware of the options available to you.


Elephants in the Living Room

Coryoth writes  |  more than 9 years ago

The United States is facing the possibility of a severe economic correction. Yet the causes of such a correction are, for the most part, being ignored in favor of partisan bickering. It is the proverbial elephant in the living room - except that it is not just one, but several elephants, that everyone is doing their best to ignore. While the likelihood that any of these issues results in disaster is low, they are worth taking the time to discuss.

The first elephant is debt. There are three kinds of debt that are of concern: household debt, the budget deficit, and the current account (trade) deficit. Of those three, only the budget deficit gets any real attention, and even then it is often brushed aside.

The current US budget deficit stands at $413,000,000,000 in the 2004 fiscal year. This is a record deficit. To put it in perspective, that's 3.6% of the Gross Domestic Product, almost a quarter of total federal spending, or 80% of the total receipts from Federal income taxes. Clearly such a situation is not good. But there are deeper implications for such a large fiscal gap. Some economists argue that such large deficits are detrimental to private investment. The deficit effectively soaks up a large portion of the US's savings, which would otherwise be invested back into the private sector. There is historical precedent for this - investment fell to record lows during the previous record budget deficits under President Reagan. But there is something else to consider as well. This debt is managed by selling Government bonds and securities. These securities can be, and often are, sold overseas.

Currently, the two largest foreign owners of US Government debt are Japan and China. Japan had an economic boom, but collapsed in the 90s and has been struggling since. The Japanese government is very conservative, and has been remarkably slow to introduce reforms that many see as necessary to stimulate the Japanese economy. Recently, however, the Japanese Prime Minister, Junichiro Koizumi, has been fighting to push through the necessary reforms, with some success, and in the last couple of years the Japanese economy has shown signs of improvement. Meanwhile, the Chinese economy is strong, and only getting stronger. Why is this important? Because as both of these economies gear up there will be far less interest in investing overseas, and much greater emphasis on investing in the local, growing economy. That would essentially amount to a mass sell-off of US debt in the form of government bonds, which would almost certainly fuel higher inflation in the US, putting pressure on the Federal Reserve to respond by increasing interest rates. That is to say, it would very likely lead to a large-scale recession in the US.

The US current account deficit stood at $530,668,000,000 in 2003, and at $313,341,000,000 for just the first two quarters of the 2004 financial year - a record figure for each of those quarters. This could represent a country living beyond its means, or it could represent an economic power attracting large amounts of foreign investment. Certainly as long as the US remains a significant economic powerhouse it can sustain high current account deficits: as long as the US remains an attractive place for foreigners to invest, an imbalance is of limited concern. Whether such a high current account deficit is sustainable is a complicated issue affected by many factors. A reasonably coherent explanation of some of those factors can be found in a 2002 paper by Catherine Mann. In rough précis: the current account deficit is balanced by private savings, but widened by budget deficits, and at the same time is driven by the attractiveness of US investments to foreigners. Should the current account deficit grow too large, the perceived ability of the US to repay the investment may decrease, causing an economic correction. In the conclusion of her paper Mann indicates that a change in trajectory (from growing to shrinking) is inevitable, and that the concern is whether this will occur through slow structural and policy changes, or through a sharp correction in overseas investment. Two of the major requirements she lists for structural change are greater fiscal discipline (resulting in budget surpluses) and increased personal savings. Since the paper was published in 2002 we have seen massive increases in the budget deficit, as just discussed, while the prospects for increased personal savings are very limited (which I will touch on in a moment). Add to that the beginnings of resurgent Asian economies attracting investment, and the risk of a sharp correction is certainly much greater. The consequences of a sudden shift in global investment away from the US would be extremely rapid depreciation of the US dollar, almost certainly resulting in considerable economic hardship in the US.

US household debt stood at $8,454,400,000,000 in 2002, and has grown since. A quick look at the associated chart shows just how serious the upward trend in household debt is. This debt is driven both by mortgages and by credit card debt in an increasingly consumerist society. Do you recall the "Shop for America" campaigns following September 11? It is exactly that kind of thinking that helps drive the consumer society even further into debt. In 2004 household debt increased four times faster than the economy, and average credit card debt for households with at least one credit card increased 300%, to over $9000. Of course this is not necessarily crippling, as a recent speech by Alan Greenspan points out. It is, however, trending in the wrong direction, and getting worse fast. It is certainly far from the increase in household and personal savings required to help curb the current account deficit.

The concern about debt for the US is, in the end, quite simple: US debt, in all three forms, is huge, and it is only getting larger. The three forms of debt, while different, are connected, in that both household debt and government debt have a significant influence on the sustainability of such a large (and growing) current account deficit. At the same time, either the current account deficit or the budget deficit, if they continue to grow, could easily trigger a rapid and severe depreciation of the US dollar. Which brings us to our next elephant.

The second elephant is the US Dollar. At the time of writing, the US Dollar is running at about 0.77 Euros to the Dollar. One could claim that this is simply due to a strong Euro, but in reality most world currencies, including the Japanese yen and the British pound, are trading strongly against the US Dollar. A quick look at the historical record of the US Dollar against the Euro, the yen, and the pound shows a distinct downward trend over the last two years in all cases. This need not be seen as a bad thing; certainly it is beneficial to US exports, an increase in which would help close the current account deficit. There are potential issues, however. The US dollar has, to some extent, remained as strong as it has due to its position as the de facto global currency, in which most major commodities, including oil and gold, are traded. The Federal Reserve estimated that in 2003 around $400 billion of the $680 billion US dollars in circulation were held outside the United States. This high demand for US Dollars overseas is an effective prop for the US Dollar, meaning it is unlikely to ever fall too low too quickly. That prop could, however, disappear: the possibility of the Euro becoming a new alternative global currency is increasing.

In 2001 Alan Greenspan gave a cogent speech on the possibilities of the Euro as a global currency. The first salient point is that a global currency tends toward a natural monopoly: as use increases, it becomes an increasingly attractive currency to hold, while the decreasing liquidity of competing currencies makes them less and less desirable as a global currency. Transitions can of course be slow, for example the transition from the Pound Sterling to the US Dollar between the world wars, but once a transition begins it becomes inevitable. Greenspan then notes that, at least on the surface, the Euro possesses all the traits required of a global currency (a stable currency based in a strong economy with a well developed financial system and deep, liquid financial markets). He continues by discussing reasons why the US Dollar remained so dominant after the introduction of the Euro: the strength of the US Dollar against the Euro (the Euro depreciated against the US Dollar in its first two years after introduction), the strength of the US economy in comparison to the EU, and the Euro's apparent inability to expand into foreign equity markets. As already noted, the strength of the US dollar against the Euro, and indeed most world currencies, has been in decline. While the relative strength of the EU economy against that of the US has increased during the US recession, the US economy is beginning to show signs of renewed growth. The Euro, however, is now beginning to extend into foreign commodity markets, most notably oil. An increasing amount of Middle Eastern oil is being traded in Euros, and while the US Dollar remains dominant, both Iran and Saudi Arabia have flirted with the idea of converting entirely to Euros. Equally significantly, Russia, the second largest oil producer in the world, has expressed serious interest in trading its oil in Euros instead of US Dollars, though it has not yet embarked on such a wholesale conversion. For now the US dollar comfortably remains the dominant player, but there are enough signs for concern, and as Greenspan pointed out, a transition has a tipping point, after which it is carried by its own momentum.

The threat to the US economy is that this transition, if it occurs, may not be slow. Because the strength of the US dollar is currently supported in part by its position as a global currency, a shift toward the Euro could trigger further decline in a weakening dollar, initiating a panic-driven feedback cycle ending in an almost complete collapse of the US dollar from its current point of strength. Such a collapse would only be compounded by the fact that the US currently repays its debt in US Dollars - effectively the only country in the world granted the privilege of paying off its foreign debt in its home currency. Should the US Dollar destabilise significantly (or the Euro establish itself as the global currency), many debt holders could choose to request payment in Euros. This would force the US to purchase Euros (at a markup) to service its debt, both reinforcing the Euro as global currency and inflating US debt. Such a situation would only serve to exacerbate the US Dollar's collapse.

Ideally, of course, such a collapse would be halted by the closing of the current account deficit as the price of imports skyrockets and US exports become ever more attractive. The US populace, however, is heavily addicted to consumption, more than willing to spend its way into increasing household debt (as already noted). Worse still, US exports have been in decline despite the recent weakness of the US dollar. Increasingly, the US is importing goods from China and services from India. Which leads us to another elephant.

The third elephant is the rise of India and China. Both the Indian and Chinese economies are growing very rapidly. These are the two most populous nations on earth, so they should not be taken lightly. Both countries are filled with young and talented people eager to make the most of their education and climb the global economic ladder. In the case of India this has taken the form of, for example, outsourced IT jobs from the US. Increasingly, US companies are importing their IT services from India, where there is a vast pool of highly capable people who do not face the vast cost of living that their counterparts in the US do. In many ways this can simply be seen as globalization and free trade beginning to more evenly distribute the wealth of the world, and it is not a threat as long as the US continues to innovate and create new industries for itself. There is a question as to whether this is actually occurring, however.

The most common statistic for measuring economic strength, Gross Domestic Product (GDP), shows the US economy to be in good shape. Current US GDP is between 10 and 11 trillion US dollars, depending on whether you measure by Purchasing Power Parity (PPP) or Current Exchange Rate (CER). GDP does have problems, though; the most important to consider here are that it includes work that produces no net gain, and that it takes no account of negative externalities. It is worth considering how much work included in US GDP has very minimal net gain. For those familiar with the SCO Group versus IBM court case, it's worth noting that tens of millions of dollars have been spent, all counted toward GDP, and yet most observers would say that, in comparison to the money spent, the gains are negligible if they exist at all. The same is true of a surprisingly large number of other court cases in an increasingly litigious country: all of it counts toward GDP, much of it returns little if any gain. A more debatable issue is the current cost of management, particularly in larger corporations with exceptionally high salaries for the many tiers of upper management: is the net gain provided by management actually comparable to the money being spent? It is certain that management provides significant value; what is unclear is exactly how much, and how that compares to the salaries involved. As stated, this is a point for debate. The fear is that if US GDP is indeed inflated by such factors, the economy could find itself hollow when it comes time to compete in earnest.

Currently India is exporting low- and mid-level IT services to the US, but given current Indian growth it is only a matter of time before India is in a position to stop selling its services piecemeal and instead sell complete packages. At present, while a certain amount of work is being outsourced to India as its economy grows, management and the corporations have remained in the US. Given the growing number of capable and experienced IT workers in India, however, it is inevitable that new companies will arise there, taking advantage of the considerably lower cost of living to compete head to head with US corporations for complete solutions. Again we have to question the ability of the US to forge ahead into new industries. It could be argued that ambition for education, science, and research is lower among the current youth of the US than among their would-be competitors in India, China, Korea and Japan. While the US is involved in the very ambitious fusion reactor project, that project will most likely be sited in Europe, and if not there, then Japan. On the other hand, it was a US based company, Scaled Composites, that recently won the Ansari X-Prize and looks to be at the forefront of commercial spaceflight. The future remains uncertain, but this is certainly an issue for concern.

Finally, the rapid growth in India and China is having other visible effects. China in particular, with its rapidly growing manufacturing sector, has seen an equally rapid increase in its demand for oil, adding a new element to the global oil market. While China is currently trying to slow its economic growth to more sustainable levels, the growth in its hunger for oil is not expected to be similarly dampened. Oil prices are already being driven ever higher, and unlike the crises of the 1970s this is not due to disruptions in supply, but simply to demand. That this is detrimental to the world's largest oil user, the US, is obvious. Of more concern is that, with increasing Chinese oil needs stretching current oil production capacity to its limits, a disruption in supply now could be catastrophic. This issue is discussed in detail in an article by Paul Krugman. In the wake of ever higher oil prices, alternative energy sources may prove to be one of the most significant new industries of the coming decade. With this in mind, the question must be asked: which countries are going to be at the leading edge of research into alternative energy sources?

There remains one significant issue to discuss: the domino effect. The three major issues discussed so far are all interconnected. Increasing strength in the Chinese and Indian economies, providing goods and services to consumption-addicted Americans willing to go into personal debt to maintain their standard of living, could easily lead to a widening of the US current account deficit. If the current account deficit grows too large it could trigger significant depreciation of the US Dollar. Should the US Dollar start to look too volatile, or become a significant financial risk to hold (due to depreciation), global markets could easily start to embrace the far more stable Euro, potentially sending the US Dollar into free fall. During a period of such extreme uncertainty in the US Dollar, foreign investors may well seek to diversify their investments away from the US towards rapidly growing countries such as India or China. That is to say, any one of these issues could trigger the others, to a devastating end.

Much of what has been discussed is speculative. Far from being probable, the potentially disastrous outcomes outlined are somewhat unlikely. Furthermore, even if the worst did come to pass, the US would rebound, and may well come back stronger than ever. The reason to consider these issues, despite the low probabilities and the eventual recovery, is that the effects could be so pernicious during the time required for such an economic correction to shake itself out. The severity of the possible outcome demands our attention. These are the issues US politicians should be discussing, instead of quoting the standard divisive talking points about the usual false dichotomies. You won't hear these issues raised, however, because politicians can't offer a soundbite about spending some money or passing a law that will fix them. Ignoring the problems won't make them go away, and just because there are no easy solutions doesn't mean that nothing can be done. What is certain is that nothing will be done if people aren't aware of the problems.
