Slashdot: News for Nerds

Tools To Automate Checking of Software Design

Zonk posted more than 8 years ago | from the looks-good-chief dept.


heck writes "Scientific American describes some of the work to develop tools for examining the design of software for logical inconsistencies. The article is by one of the developers of Alloy, but it does reference other tools (open and closed source) in development. The author admits that widespread usage of the tools is years away, but it is interesting to read about the approach they are taking regarding validation of design."
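Alloy checks a design by exhaustively searching every instance of the model up to a small size bound and reporting any counterexample it finds. Alloy has its own relational language; as a rough, hypothetical illustration of that "small scope" idea in plain Python, here is a made-up design rule for a bounded stack (the capacity, the value domain, and the round-trip claim are all invented for this sketch):

```python
from itertools import product

def push(stack, x, cap=3):
    """Design rule under test: push on a full stack silently drops x."""
    return stack if len(stack) >= cap else stack + (x,)

def pop(stack):
    return stack[:-1]

def check_round_trip(cap=3, values=(0, 1)):
    """Exhaustively search every stack up to `cap` elements for a
    counterexample to the claim: pop(push(s, x)) == s."""
    counterexamples = []
    for n in range(cap + 1):
        for stack in product(values, repeat=n):
            for x in values:
                if pop(push(stack, x, cap)) != stack:
                    counterexamples.append((stack, x))
    return counterexamples

# The claim fails exactly when the stack is already full: push drops
# the new element, and pop then removes one the design wanted to keep.
print(check_round_trip())
```

Like Alloy, the search proves nothing beyond the bound; the bet is that most design flaws already show up in small instances.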


128 comments

too hard. (5, Insightful)

yagu (721525) | more than 8 years ago | (#15458140)

Back in the mid-80s I attended a seminar in Atlanta about automated software engineering... tools and workbenches that would take specifications and design parameters as input and crank out entire suites of software/applications. (Heck, there was even a new acronym for it; I can't remember what it was, but it was a hot, hot, hot button for a few years.) We were pretty much warned our careers were over: automation was here to generate what we as professionals had studied years to create.

It never happened. It never came close to happening. We are as far away today, or further, from tools that can generate applications transcendentally.

I was skeptical then, I'm skeptical now. Tools like the ones described are useful, but they're not foolproof, and they hardly supplant the intuition and "art" that is programming.

At best these tools are an adjunct to the software development process, not a replacement for common-sense testing, design, and code walkthroughs. I could construct many scenarios that would be logically consistent but have no relationship to the desired end of the application, i.e., a logic consistency tool would not detect a problem. Any poorly designed system with these "new" tools applied will merely be a rigorously poor system.

As for the prime example (in the Scientific American article) of the Denver International Airport baggage handling debacle, I doubt logic analysis tools would have had much impact on the success or failure of that effort. I knew people involved in the project, and "logic consistency" in their software was the least of their problems. (I would have loved to have been on a team to design and develop that system -- I think it was a cool concept, and ultimately VERY feasible...)

I did get one benefit from the Atlanta Seminar -- I got a football signed by Fran Tarkenton (he was CEO of one of the startup companies fielding a software generating workbench).

Re:too hard. (0)

Anonymous Coward | more than 8 years ago | (#15458154)

I was skeptical then, I'm skeptical now. Tools like the ones described are useful, but they're not foolproof, and they hardly supplant the intuition and "art" that is programming.

What you describe has nothing to do with the article.

Alloy is complementary to UML, not some snake oil that IBM-like companies crap out frequently.

Re:too hard. (4, Insightful)

AuMatar (183847) | more than 8 years ago | (#15458194)

UML is snake oil in and of itself. Yes, it's occasionally useful to draw a diagram. Yes, a simple common syntax for those can be good. But that's about all it's useful for -- designing in UML is a joke (it just doesn't, and never can, portray all the information that text can) and automatic code generation is even more of a joke (at absolute best, it saves you 5 minutes of typing boilerplate class skeletons).

Re:too hard. (5, Insightful)

Anonymous Coward | more than 8 years ago | (#15458416)

I agree completely with the code-generation ... it's useless.

When I was young, stupid and without real-world experience, I also thought UML was crap. However, once you realize that the point of UML is not helping to design, but helping to communicate the design to other developers, it becomes useful (not ground-breaking, not amazing, not necessary, just useful).

It's like this: when you design a piece of software alone, you can use whatever system you want to model it (keep it all in your memory, write the class structure as pseudo-code, etc.). If you need to communicate your design to others, it's good to have a system with defined semantics to formulate the design (visualization is not even the issue here), and one such system is UML; surely not the best, but one that a sufficient number of developers (software engineers, not programmers) are trained to understand.

Lacking such a system of communication, you would first have to explain the meta-model (explain how the description of your design is to be understood) instead of just explaining the design-model itself.

Re:too hard. (0)

Anonymous Coward | more than 8 years ago | (#15459756)

Will somebody please mod parent up!

Re:too hard. (1)

the eric conspiracy (20178) | more than 8 years ago | (#15458424)

Yes, UML cannot portray all the information text can, especially when you are specifying business rules. Yet text can be a very confusing way of describing a class hierarchy, or a set of states and transitions, or relationships in a set of database tables. For this sort of information a diagramming language like UML is very, very useful.

I also agree with you about code generation - it is a very minor feature of UML tools. What isn't so minor however is reverse engineering. Taking a legacy project and generating a set of class diagrams can be a powerful way of finding design problems quickly. Showing a spaghetti class diagram makes it pretty easy to convince a room full of developers that what they have wrought is in need of refactoring.

Blaming your tools? (1)

meburke (736645) | more than 8 years ago | (#15458830)

There you go, blaming your tools...

There are environments that have had great success implementing UML in their design process. My favorite example is from Nokia:

http://www.powells.com/biblio?isbn=0521645301 [powells.com]

I had trouble wrapping my head around UML until I read Jaaksi's books (there's more than one), and I've been using Rational Rose since 1996. I never found or developed a clear system for making it work until I read his books.

A book that clearly describes the case for code generation (and the limits) is this one:

http://www.amazon.com/gp/product/1930110979/104-3375871-4590326?v=glance&n=283155 [amazon.com]

The author has apparently developed a network for the support of code generation, and has some useful tools.

Now, what the world REALLY needs, is a method of making the UML tools in VISIO do what they are supposed to do...(Some tools ARE truly deficient!)

Mike

Re:too hard. (4, Insightful)

deathy_epl+ccs (896747) | more than 8 years ago | (#15459030)

at absolute best, it saves you 5 minutes of typing boilerplate class skeletons

The code generation from UML is only supposed to be the class skeletons, and I've got to ask... have you never written an application with more than a handful of classes? The time spent just building the skeletons for some of the applications I've written over the years has taken a helluva lot longer than 5 minutes.

I personally find class diagrams darned useful. I also find use case diagrams useful not because they help the programmer, but because they help to make sure that we correctly understand what the user is asking for.

From reading your post, though, I'm not entirely certain you have actually bothered to learn UML before you started to slam it. You say:

it just doesn't, and never can, portray all the information that text can

Anybody who knows UML knows that an incredible amount of the work is done in text. Sure, you can create state and activity diagrams for your use cases... or you can just attach textual documentation that is typically easier to create and much smaller than state or activity diagrams. This is what you are typically ENCOURAGED to do for anything but the most convoluted process... and when you have a very convoluted process, even text fails.

There are times that it is very useful to have pictures with circles and arrows and a paragraph on the back of each one.

UML is not the best solution for every development project, but for very large projects with lots of developers involved, it can certainly make life easier.

Re:too hard. (1)

Bodrius (191265) | more than 8 years ago | (#15459573)

I think something is missing here:
- Diagrams are useful in design
- Common syntax and standards are useful in technical artifacts (e.g.: code, documents)

Ergo:
- Designing in diagrams using an effective, standard syntax is a joke.

Complaining about UML being insufficient for a complete design is like complaining that CGI is not enough to create a movie. It is both obviously true and obviously beside the point.

Like any decent tool, it serves a particular task for which the alternatives are inferior or more costly.
In this case, you wouldn't want to portray all the information you'd normally write out. You'd want to model specific information that needs to be unambiguous, at some level of abstraction, in a way that is simple to digest.

If you think that writing out in English a non-trivial sequence diagram or a large class structure is a better way to communicate this information, you must be surrounded by people with awesome reading comprehension skills.

UML is immensely useful for designing and communicating at an adequate level of abstraction at a time. And yes, the same goes for any other diagram convention you want to choose. It's not like relational and statechart diagrams are new.

But if you don't spend resources creating a pet Turing-complete scripting language for each project and then training everyone to use it, why do that with your design language, where the most important bugs are likely to originate from?
Particularly when you need to train not just your team, but your customers, in your design lingo?

More than class skeletons... (1)

oSand (880494) | more than 8 years ago | (#15459780)

One can generate code from OCL constraints and check them at runtime.
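As a hypothetical sketch of that idea in Python rather than OCL-annotated UML: a class decorator can play the role of the generated code, re-asserting an OCL-style invariant after every public method call (the `Account` class and the constraint `inv: self.balance >= 0` are invented for illustration):

```python
def invariant(predicate, message):
    """Class decorator: re-check an OCL-style invariant after every
    public method call, mimicking code generated from a constraint."""
    def wrap(cls):
        for name, attr in list(vars(cls).items()):
            if callable(attr) and not name.startswith("_"):
                def checked(self, *args, __method=attr, **kwargs):
                    result = __method(self, *args, **kwargs)
                    assert predicate(self), message
                    return result
                setattr(cls, name, checked)
        return cls
    return wrap

# Hypothetical class; the predicate mirrors "inv: self.balance >= 0".
@invariant(lambda self: self.balance >= 0, "balance must be non-negative")
class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount

a = Account()
a.deposit(10)
a.withdraw(3)
print(a.balance)          # 7
try:
    a.withdraw(100)       # violates the invariant at runtime
except AssertionError as e:
    print(e)              # balance must be non-negative
```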

Re:too hard. (1)

radtea (464814) | more than 8 years ago | (#15459970)

automatic code generation is even more of a joke (at absolute best, it saves you 5 minutes of typing boilerplate class skeletons).

False.

I have written a series of XML-based code-generators over the past five or six years, and found them increasingly useful. I started out with a big dream: represent an application as a narrative [supersaturated.com] and tie the document structure (DTD) to the program structure in a deep and powerful way.

Like a lot of big dreams, it didn't survive contact with reality (or with other developers). After seeing the framework gutted in a re-write by a developer who needed to "simplify" it to understand it, I re-wrote it for my own use without such grand ambitions but with a lot more practicality.

Properly done, generated code gives you a lot more than a five-minute savings. It gives you all of your serialization code, which I particularly hate writing, and it gives you some access to a higher-level modelling language. Admittedly XML is not exactly god's gift to data modelling, but it's better than nothing. (I actually tried developing an implementation of the code-generator in XSLT, but have subsequently decided that I'd rather chew my own leg off than go there.)

I treat the code generator like a particularly intelligent junior developer--one who is capable of understanding my design and writing code that implements that design and conforms to my company's coding standards and serializes the classes to well-formed XML. This kind of thing is also useful for prototyping, and useful for exploring the conceptual aspects of the design. I will often rev the XML specifications quite a few times before generating any code, simply in the process of getting the representations and relationships right.

A well-written code generator is a valuable tool in any developer's toolkit. Mine is not yet ready for prime time (no documentation to speak of, and I'm too busy with clients right now to write any) but I'm sure there are others out there that are just as good.
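The poster's generator is private and XML-based; as a purely hypothetical miniature of the same idea (the spec format, the `Point` class, and the `to_xml` method are all invented here), a few lines of Python can turn a declarative spec into a class with its serialization code already written:

```python
import xml.etree.ElementTree as ET

# Invented spec format: one class, a list of typed fields.
SPEC = """
<class name="Point">
  <field name="x" type="int"/>
  <field name="y" type="int"/>
</class>
"""

def generate(spec_xml):
    """Emit Python source for the class described in the spec,
    including the serialization method the poster mentions."""
    root = ET.fromstring(spec_xml)
    name = root.get("name")
    fields = [f.get("name") for f in root.findall("field")]
    args = ", ".join(fields)
    body = [f"class {name}:",
            f"    def __init__(self, {args}):"]
    body += [f"        self.{f} = {f}" for f in fields]
    body += ["    def to_xml(self):",
             "        parts = ''.join(",
             "            '<%s>%s</%s>' % (f, getattr(self, f), f)",
             f"            for f in {fields!r})",
             f"        return '<{name}>' + parts + '</{name}>'"]
    return "\n".join(body)

namespace = {}
exec(generate(SPEC), namespace)   # "compile" the generated source
p = namespace["Point"](3, 4)
print(p.to_xml())                 # <Point><x>3</x><y>4</y></Point>
```

The real payoff, as the comment says, is that the spec stays the single higher-level model while the tedious serialization code is regenerated on demand.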

Re:too hard. (1)

colinrichardday (768814) | more than 8 years ago | (#15458300)

CASE (Computer Aided/Assisted Software Engineering?)?

Re:too hard. (0)

Anonymous Coward | more than 8 years ago | (#15458415)

That's so 1980. Today we have a much better paradigm, called "model checking".

Re:too hard. (1)

colinrichardday (768814) | more than 8 years ago | (#15458537)

The original poster said that this occurred in the mid-80's. Also, Fran Tarkenton worked at KnowledgeWare, which produced CASE tools.

Re:too hard. (0)

Anonymous Coward | more than 8 years ago | (#15458373)

they hardly supplant the intuition and "art" that is programming

If your code is a product of intuition and "art," you write bad code. Code should be the product of a very rigid and methodical process, not the output of some hippie wannabe typing like a monkey at the keyboard. Your code would never pass muster in areas where reliability, security, and absolute adherence to specification are the norm, and not merely the result of months, if not years, of debugging in the field.

Re:too hard. (1)

Doctor Memory (6336) | more than 8 years ago | (#15458455)

We are as far away today or further from tools that can generate applications transcendentally.

True, but the fallout's been useful. Ever used Rational XDE? I see Sun has something similar in the latest Sun Studio 8 Enterprise, but I haven't used it. Basically, it's a round-trip UML modeler: lay out your class diagram, and XDE will generate the code for it. Update the generated skeleton with "real" code, and XDE will update your model from the changes. It's much nicer than trying to do things with Rational Rose -- then again, pulling out your toenails with rusty pliers is nicer than trying to do some things in Rose (say, trying to reverse-engineer a subset of your project that references a class that isn't part of the subset you're working on).

Like it or not, UML is the new flow charts, and any tool that makes using it easier ultimately makes it easier for a team to communicate. I would wrap this up succinctly, but I see it's after 1700 on a Friday, so ... beer.

Re:too hard. (1)

jgrahn (181062) | more than 8 years ago | (#15460463)

True, but the fallout's been useful. Ever used Rational XDE? [...] Basically, it's a round-trip UML modeler: lay out your class diagram, and XDE will generate the code for it. Update the generated skeleton with "real" code, and XDE will update your model from the changes. It's much nicer than trying to do things with Rational Rose -- then again, pulling out your toenails with rusty pliers is nicer than trying to do some things in Rose [...]

You have a realistic view on Rational Rose and that makes me want to trust you ...

However, since Rational promised that Rose would deliver all that glorious UML round-trip goodness, why should I trust them when they claim that this "XDE" thing delivers it? I've given Rational the benefit of the doubt too many times already.

Re:too hard. (2, Insightful)

deuterium (96874) | more than 8 years ago | (#15458470)

We were pretty much warned our careers were over, automation was here to generate what we as professionals had studied years to create.


I vaguely recall that fad as well. A lot of executives were jazzed about the idea, as they seemed to assume that software was rote and procedural anyway. They viewed programmers as simple translators, not realizing that program code doesn't just facilitate the resulting software, but is the software. Regardless of how many tools you devise to commoditize the basic functions of software, the effort required to actually make it is proportional to the complexity of describing its total functionality. You'll just end up having to specify all of the procedures involved anyway, whether you do it in code or through some meta-coding tool.
By the same token, code checkers can't know what your intentions are for every variable and class relationship. They can tell you if you generate invalid or null variables, or if a function is orphaned -- stuff that is strictly boolean. Beyond mistakes like that, you'll have to tell the checker explicitly what to look for, negating the benefit of the tool.
We have time saving techniques already. They're called code libraries, design patterns, and error handling.
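The "strictly boolean" checks the comment describes really are mechanical. A toy version of the orphaned-function check, using Python's `ast` module on an invented sample (real checkers do far more, but the flavor is the same):

```python
import ast

# Hypothetical module under analysis: one function is never referenced.
SOURCE = """
def used():
    return 1

def orphaned():
    return 2

print(used())
"""

def find_orphans(source):
    """List functions defined at module level but never referenced --
    the kind of purely mechanical check the comment describes."""
    tree = ast.parse(source)
    defined = {node.name for node in tree.body
               if isinstance(node, ast.FunctionDef)}
    referenced = {node.id for node in ast.walk(tree)
                  if isinstance(node, ast.Name)}
    return sorted(defined - referenced)

print(find_orphans(SOURCE))   # ['orphaned']
```

Note what the tool cannot do: it can report that `orphaned` is unreachable, but it has no idea whether the programmer *meant* to call it, which is exactly the comment's point about intentions.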

Re:too hard. (1)

Coryoth (254751) | more than 8 years ago | (#15459161)

By the same token, code checkers can't know what your intentions are for every variable and class relationship. They can tell you if you generate invalid or null variables, or if a function is orphaned, stuff that is strictly boolean. Beyond mistakes like that, you'll have to tell the checker in explicit manners what to look for, negating the benefit of the tool.

If you state your intentions in a language that the code checker understands (preferably a language designed to be expressive when it comes to making assertions) then the checker can actually determine a great deal about your code. For instance, look at ESC/Java2 [secure.ucd.ie]. By providing annotations to your Java code it allows the checker to find far more errors than the small amount you suggest. Developing using ESC/Java2 instead of a compiler to "check" your code can be very productive (it checks Java syntax and does type checking as well, so if it passes ESC/Java2 it will definitely compile). Things get flagged as potential errors early on, often forcing you to add extra annotations to clarify your intentions (which is a good thing! it often helps you flesh out your ideas of exactly what you intend), and by the time you're done the odds of the code working as you intend are far, far higher than you would ever expect otherwise.

The fact that the annotation language ESC/Java2 uses can also be automatically included in your JavaDoc documentation (making method requirements explicit, and making clear what a method guarantees), and can also be used to automatically build a JUnit testing framework for the code -- well, that's just icing. Take a look: download the Eclipse plugins here [ksu.edu] and here [ksu.edu] and try it out. I think you'll be remarkably surprised how powerful the error checking is, and how productive the development cycle can be when using it.

Jedidiah.

too hard-Intentions. (0)

Anonymous Coward | more than 8 years ago | (#15459511)

Isn't "coding by intent" an Eiffel thing?

Re:too hard-Intentions. (1)

Coryoth (254751) | more than 8 years ago | (#15460207)

Certainly Eiffel was the first language to integrate it as a serious part of the language. These days, though, you can get more powerful systems for other languages. Eiffel offers pre- and post-conditions, but uses them as runtime checks. Extensions to Java and Ada like JML and SPARK allow for much more expressive statements than pre- and post-conditions alone -- for instance, JML lets you specify which data fields a given method may access, or write to, or under what conditions an exception may be thrown -- and there exist strong static checkers such as ESC/Java2 and the SPARK toolset which let you check for possible errors statically (thus checking all possible cases rather than individual test cases). With JML you can write as much or as little as you wish in the way of annotations describing intent.

Jedidiah.

Tools Have Been Successful (1)

raftpeople (844215) | more than 8 years ago | (#15458587)

CASE, code generators, etc. have their place and can be successful. I have created and used these types of tools and continue to enjoy analyzing where they are a good fit.

OLTP business applications tend to be the best fit because there are many many forms/transactions to create and they tend to be relatively simple (mostly data validation, some calculations, database update, etc.). The work that is being automated is not creative problem solving but rather the application of a solution multiple times to similar but not identical entities.

But ultimately the real work is the proper abstraction of the business process and this still requires a human.

AD/Cycle (1)

meburke (736645) | more than 8 years ago | (#15458687)

Yeah, that initiative was called "AD/Cycle", and Fran Tarkenton was the CEO of KnowledgeWare at the time.

The thing is, on an abstract level, "Designing Code from Logically-Proven Constructs" (the title of a book by James Martin) makes total sense: if the base elements are logically proven, and if the complex elements are constructed of base elements, then the output will contain nothing unproven. However, the design of the programs needs to be at a meta-level to the operation. (Thanks, Goedel!) I could, for instance, design a program to operate your sprinkler system perfectly by using logically-proven elements. I would still have to design at a higher level to make allowances for the fact that you live in New Orleans and the sprinkler is now under 4 feet of floodwater.

The key, to me, is getting the best design upstream and having a clear logical system for translating the requirements downstream to an operational level. Criticisms will be leveled about model-driven architecture, UML and even flowcharts, but those are conceptual maps, not code. They are SUPPOSED to be more abstract than the code, for clarity. (They are a language, and language is still imprecise. As Korzybski said, "The map is not the territory.") Aristotelian logic still has its limits, so tools like this are good but, as the article implies, not sufficient at this time.

Mike

Re:too hard. (1)

kfg (145172) | more than 8 years ago | (#15458749)

I could construct many scenarios that logically would be consistent but have no relationship to the desired end of the application. . .

Bingo! In fact, unless you are working at the very cutting edge of science and/or technology, going where no man has gone before, the really hard part of program design is figuring out just what the heck that desired end really is.

The rest is just a programming exercise.

Computers may be able to prove a program correct, logically consistent, and even generate algorithms, but these have nothing to do with design. They're the mere implementation of design.

A computer will never be able to design a program until you can clearly and explicitly tell it what you want the program to do, at which point, you have already designed the program.

When did "design" become a dirty word in programming anyway? And how about giving programmers a solid foundation in analytical logic?

Show of hands. How many of you out there who claim that you learned everything you ever needed to know about programming from the web spent the first few months, at least, ignoring code and studying design?

Not meaning to unduly ruffle any feathers, really, but maybe that's why so much software sucks.

KFG

Re:too hard. (0)

Anonymous Coward | more than 8 years ago | (#15460557)

I would disagree; it is possible to have the computer design a program. Create a fitness function which determines the fitness of the program. This function may incorporate user input, so that stuff like the UI makes sense, etc.

You might say that a fitness function is equivalent to an actual design, but I disagree. I've seen computers come up with quite novel design solutions.

After a fitness function is produced, the computer can use genetic algorithms to evolve the program.

I don't think that GAs will replace real programmers any time soon, but I think they are underused.
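The loop the comment describes (fitness function first, then evolve) can be sketched in a few lines. This is a hypothetical toy where the "specification" is just a target bit string; selection, crossover, and mutation parameters are invented for the demo:

```python
import random

TARGET = [1] * 12   # stand-in "specification" for the demo

def fitness(genome):
    """The fitness function plays the role of the spec: score a
    candidate by how many positions match the target behaviour."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=300, mutation=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break                                  # spec fully met
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))    # one-point crossover
            child = [g ^ (rng.random() < mutation) # per-bit mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children                   # parents survive (elitism)
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The hard part, as both comments agree, is not this loop but writing a fitness function that actually captures what you want; for anything beyond toy targets, that function *is* the design.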

Re:too hard. (0)

Anonymous Coward | more than 8 years ago | (#15458857)

Basically, because at some point, people realised that all you're doing is programming in a higher-level language.

That's it. No matter how you dress it up, all you're doing is programming in a more abstract language. The hard part isn't the language any more; it's the actually hard part of taking the real-world problem and breaking it down so a computer can understand it.

Go read "No Silver Bullet [wikipedia.org] ", and "No Silver Bullet, Refired."

The other problem is that, IMHO, all these approaches only deal with 'internal' bugs. Often, though, the bug in a program is how it interacts with something else - eg, the GUI toolkit. I suspect that if the programmer knew enough about the external library to write sensible tests / high level validation, they would know enough to avoid the bugs to start with.

"That's Incredible!" (1)

onetruedabe (116148) | more than 8 years ago | (#15459104)

I got a football signed by Fran Tarkenton (he was CEO of one of the startup companies fielding a software generating workbench).

"That's Incredible!"

(Boy am I showing my age...)

--
It's Better to Have It and Not Need It ... than Need It and Not Have It.

Re:"That's Incredible!" (0)

Anonymous Coward | more than 8 years ago | (#15459509)

Ah, I remember that show. It sucked.

Re:"That's Incredible!" (1)

zymurgy_cat (627260) | more than 8 years ago | (#15459979)

Not as much as "Real People".

Dijkstra Sceptical too. (1)

Chopo (978903) | more than 8 years ago | (#15459542)

"Program testing can be used to show the presence of bugs, but never to show their absence!" -- Edsger Wybe Dijkstra

It remains an art.

Re:Dijkstra Sceptical too. (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15460153)

Maybe he was against software testing after programming, but he was FOR formal verification:

From the 1970s, Dijkstra's chief interest was formal verification. The prevailing opinion at the time was that one should first write a program and then provide a mathematical proof of correctness. Dijkstra objected that the resulting proofs are long and cumbersome, and that the proof gives no insight as to how the program was developed. An alternative method is program derivation, to "develop proof and program hand in hand". One starts with a mathematical specification of what a program is supposed to do and applies mathematical transformations to the specification until it is turned into a program that can be executed. The resulting program is then known to be correct by construction. Much of Dijkstra's later work concerns ways to streamline mathematical argument. In a 2001 interview, he stated a desire for "elegance," whereby the correct approach would be to process thoughts mentally, rather than attempt to render them until they are complete. The analogy he made was to contrast the compositional approaches of Mozart and Beethoven.

From WP [wikipedia.org]

Re:too hard. (1)

radtea (464814) | more than 8 years ago | (#15459995)

I was skeptical then, I'm skeptical now. Tools like the ones described are useful, but they're not foolproof, and they hardly supplant the intuition and "art" that is programming.

I once studied "Z", a specification language that was supposed to eventually be able to feed automatic correctness checkers. I realized how bad the language was when one of the canonical examples required that the design of the code itself be contaminated by the constraints of the specification language.

For some very narrowly defined problem domains I'm sure tools like this are useful, but I'm equally sure that the specification of a useful, general-purpose tool of this type would grow in complexity until it finally collapsed under its own weight. The final specification of an effective design evaluation tool would read, "Entity with intelligence, understanding, taste and good judgement."

Re:too hard. (0)

Anonymous Coward | more than 8 years ago | (#15460364)

Coverity has a very successful business built on this: http://coverity.com/ [coverity.com]. It was spun off from research at Stanford and, I believe, has found the most bugs in Linux.


Bugs (1)

Umbral Blot (737704) | more than 8 years ago | (#15458188)

This may be great for catching some bugs, but I think the majority of problems within software are not from "conflicting" instructions; they come from the program doing the wrong thing (i.e. not what we wanted) or simply performing an illegal operation in the process of getting the correct results. Neither of these cases is a logical inconsistency. Now maybe if we all programmed in Prolog this would be more useful...

Re:Bugs (0)

Anonymous Coward | more than 8 years ago | (#15458383)

This is a ridiculous comment, do you really blame the program for your bad programming? Error reporting is part of proper programming.

Well, yeah... (1)

jd (1658) | more than 8 years ago | (#15458671)

This is why you really want to state clearly the assumptions made on entry to every function (the preconditions) and the consequences that logically follow from applying what you think the function does to the data passed in, given those preconditions (the postconditions).


You then want the automated tool to map out the paths through the code, substituting every variable for an equivalent expression consisting of the inputs that would go into producing that variable, for each path. (This works for loops, so long as you can decompose the condition for the loop.)


Finally, the tool would need to determine one or more ranges of inputs for which at least one variable CANNOT meet the preconditions or postconditions set, in some path, but which are not expressly prohibited from being input at that point.


The tool would then be able to show you the path that is buggy (but not where - the results may be valid but incorrect at ANY point prior to the invalid data showing up). The programmer could then decide if the input should be allowed, or use the information about the path to actually debug the code.


This is not a foolproof method:


  • It only catches situations where the tool can force an impossible result.
  • The less decomposable the data is the less accurate it would be. (eg: If you have a random-length loop, but the loop variable is a byte in size, the tool would be forced to assume the loop can range from 0 to 255, even if the random function was much more limited.)
  • It's only as accurate as your preconditions and postconditions. (If you miss out a condition, no automated tool could ever test for it. If the condition is inaccurate, you'll get false positives and/or false negatives.)
  • Most programs have, in effect, an infinite number of possible paths, once totally unrolled. You can't test them all in the general case. Well, not unless you're very bored and immortal. This means this method cannot prove a program bug-free, only that it contains a bug within the number of paths checked. (You CAN test any number of paths that have no resultant impact on the state of the program, because once you've proved one cycle correct for each such path, you've proved every possible combination of every possible complexity for all such paths.)


Decomposition of unrolled paths is the ONLY way to test for unexpected side-effects, self-modifying code and other inconsistent systems. Because it is a limited approach, however, such code should really be avoided where possible.


Decomposition without unrolling (a common form of static testing) is useless in any case where there is any kind of non-fatal error that slowly accumulates, as everything is generally going to be valid on a single pass through. Static testing without decomposition (an even more common form of static testing) is unlikely to catch most non-fatal bugs.


Slow poisoning through an accumulation of non-fatal bugs that - in and of themselves - don't stray too far from what's expected is probably the most common failure mode.

IEEE CS:ToSE talks about this in their last issue (2, Interesting)

cavemanf16 (303184) | more than 8 years ago | (#15458196)

For members of IEEE with a subscription to IEEE Computer Society's Transactions on Software Engineering, the last issue (April) has a very interesting article related to this stuff titled: Interactive Fault Localization Techniques in a Spreadsheet Environment [computer.org] . Basically, the article explains how they have worked to develop and test techniques that allow "end-user programmers" (people who create formulas in spreadsheets and such) to use automated fault localization testing tools to help debug their "applications" (spreadsheets) at runtime. Pretty interesting stuff that they found in their analysis. (It's easier for you to just go read it than for me to attempt to summarize it at the end of my work week. ;)

Duh! Must be a member to read article! (1)

meburke (736645) | more than 8 years ago | (#15458458)

You must have been tired when you posted: The article you linked to is only readable by people who have a current membership. Too bad.

Mike

Re:Duh! Must be a member to read article! (0)

Anonymous Coward | more than 8 years ago | (#15458770)

Duh! You must have been tired when you read it for it says, For members of IEEE with a subscription to IEEE Computer Society's Transactions on Software Engineering... Too bad.

Re:Duh! Must be a member to read article! (1)

meburke (736645) | more than 8 years ago | (#15458917)

Yup---definitely tired...I missed it completely. Mike

I see one problem with this. (2, Insightful)

Dragoonmac (929292) | more than 8 years ago | (#15458217)

From what I'm reading, it looks like these programs perform all sorts of different executions. That's great and all, but they probably don't behave like real people do. The average user isn't going to create a file and then (in the middle of that) start running the delete-file interface. I also doubt these tests include other common user issues (like clicking the same function over and over again if it doesn't respond immediately). Maybe I'm just not understanding what these do... but if I'm even half right, real-world tests are the way to go.

Re:I see one problem with this. (1)

DragonWriter (970822) | more than 8 years ago | (#15458284)

From what I'm reading, it looks like these programs perform all sorts of different executions. That's great and all, but they probably don't behave like real people do. The average user isn't going to create a file and then (in the middle of that) start running the delete-file interface. I also doubt these tests include other common user issues (like clicking the same function over and over again if it doesn't respond immediately). Maybe I'm just not understanding what these do... but if I'm even half right, real-world tests are the way to go.
Why is it either/or? Automated testing ideally allows you to eliminate whole classes of problems before you get to testing with human users -- which should make your final testing, with real world conditions and human users, more productive.

Thank you! (1, Insightful)

Anonymous Coward | more than 8 years ago | (#15459922)

Someone who doesn't see it as an all or nothing proposition.
Tools that make sure programs are self-consistent are good!
What's the point of having testing and real world trials if your programming doesn't even agree with itself?

Re:I see one problem with this. (0)

Anonymous Coward | more than 8 years ago | (#15458350)

Actually, those are exactly the kinds of things the program should also be testing for. All paths should be exercised and covered. The things that you don't expect are the things that you don't account for.

Re:I see one problem with this. (1)

MBCook (132727) | more than 8 years ago | (#15459409)

But it's still important to test those kinds of things. A user MAY do that. Apple used to have a rather ingenious way of testing things. They used it to get rid of the bugs in the original Mac OS. Check out the story at Folklore.org [folklore.org] .

Algorythm (sp?) vs. human error (0)

Anonymous Coward | more than 8 years ago | (#15458227)

There are many ways to translate an algorithm into code, just as there are many ways to express ideas in English (or any other "natural" language). Would this do anything to eliminate/reduce the inconsistencies/errors in code that is meant to implement algorithms already considered/proved correct?

LWN - Lock Checker (3, Interesting)

MBCook (132727) | more than 8 years ago | (#15458230)

LWN [lwn.net] just did a piece on a lock validator that just went into the kernel. The article [lwn.net] is currently subscriber only and won't be visible to non-subscribers until next Tuesday, IIRC.

It was a very interesting piece. It talked about the problems of locking (more locks make deadlocks easier to introduce, but harder to track down) and the way the code goes about finding problems. Basically, it remembers when any lock is taken or released, which locks were held before that, etc. Through this it can produce warnings. For example, if lock B needs lock A, but there is a situation where lock B is taken without A being held, it will flag that.
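The mechanism just described can be sketched as a toy lock-order validator. This is only a sketch of the idea; the real validator is far more sophisticated, and all names here are invented for illustration:

```python
# A toy lock-order validator in the spirit of the description above.
# The idea: record which locks are held whenever a lock is taken, and
# warn about any pair that has been taken in both orders -- a
# potential deadlock, even if no deadlock occurred on this run.

from collections import defaultdict

class LockValidator:
    def __init__(self):
        self.held = []                       # locks currently held, in order
        self.taken_while = defaultdict(set)  # taken_while[a]: locks taken while a held
        self.warnings = []

    def acquire(self, lock):
        for h in self.held:
            # 'lock' is being taken while 'h' is held. If 'h' was ever
            # taken while 'lock' was held, the ordering is inconsistent.
            if h in self.taken_while[lock]:
                self.warnings.append((h, lock))
            self.taken_while[h].add(lock)
        self.held.append(lock)

    def release(self, lock):
        self.held.remove(lock)

v = LockValidator()
v.acquire("A"); v.acquire("B")   # one code path: A then B
v.release("B"); v.release("A")
v.acquire("B"); v.acquire("A")   # another path: B then A -- flagged
print(v.warnings)                # [('B', 'A')]
```

The point of the technique is visible in the example: the warning fires from ordering history alone, without the unlucky timing an actual deadlock would require.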

The article has MUCH better descriptions. But through the use of this, the software can find locking bugs that may not be triggered on a normal system under normal loads. Here is a summary bit:

"So, even though a particular deadlock might only happen as the result of unfortunate timing caused by a specific combination of strange hardware, a rare set of configuration options, 220V power, a slightly flaky video controller, Mars transiting through Leo, an old version of gcc, an application which severely stresses the system (yum, say), and an especially bad Darl McBride hair day, the validator has a good chance of catching it. So this code should result in a whole class of bugs being eliminated from the kernel code base; that can only be a good thing."

It was a piece of code from Ingo Molnar, you should be able to find it on the kernel mailing-list and read about it.

Kudos, by the way, to LWN for the great article.

Re:LWN - Lock Checker (1)

int19h (156487) | more than 8 years ago | (#15459731)

> The article [lwn.net] is currently subscriber only and won't be visible to non-subscribers until next Tuesday, IIRC.

No problem, just right click the login-box and select "Login with BugMeNot".

You need Firefox and the BugMeNot-extension, though. Firefox can be found in your favorite repository or at http://www.mozilla.com/firefox/ [mozilla.com] .

The BugMeNot extension is here: http://extensions.roachfiend.com/bugmenot.xpi [roachfiend.com]

Re:LWN - Lock Checker (1)

MBCook (132727) | more than 8 years ago | (#15460057)

You have to PAY to get a LWN subscription. I'm not talking about a general login, I'm talking about a PAYING account. Unless someone PAID and then put the login up on BugMeNot, that won't work.

SECOND, how kind of you to encourage people to steal from such a great website. LWN is the only one I subscribe to because I like them so much. They aren't a "pay us or you won't see anything" site (like most science journals). They aren't a "pay us and we won't put large flash ads between each page" site (their only ad is a Google ad on the left side). They simply let subscribers see content a week before the general public.

They are nice guys and do excellent work. I've been supporting them because I don't want to see them die.

Thanks for spitting on that.

Re:LWN - Lock Checker (1)

MBCook (132727) | more than 8 years ago | (#15460080)

OMG.

It's a real subscriber account. It's the HP group account.

Please, DON'T use that login. Support LWN.

I've reported the link to both BugMeNot and LWN.

software snake oil (3, Insightful)

penguin-collective (932038) | more than 8 years ago | (#15458241)

None of those tools have ever been demonstrated to be cost-effective means of making software more dependable. It's an article of faith that adding a complex notation and another complex set of tools to the development process makes the product any better.

Re:software snake oil (1)

Not_Wiggins (686627) | more than 8 years ago | (#15458705)

Spot on... we all know the proper way to ensure high-quality software design is with a heavy Waterfall [waterfall2006.com] methodology.
Ooo... and throw in lots of bureaucratic layers in your organization, too!
Lord knows software can't be high-quality without at least 10 separate management rubber-stamps on it. ;)

Re:software snake oil (1)

l33td00d42 (873726) | more than 8 years ago | (#15458921)

None of those tools have ever been demonstrated to be cost-effective means of making software more dependable.

Your choice of word ordering is interesting. The article is not about making software more dependable; it's about making more dependable software!

Re:software snake oil (1)

babble123 (863258) | more than 8 years ago | (#15459522)

None of those tools have ever been demonstrated to be cost-effective means of making software more dependable. It's an article of faith that adding a complex notation and another complex set of tools to the development process makes the product any better.

You're absolutely right. Unfortunately, that also applies to pretty much every other software engineering tool, including ones that are currently being used. There are very few tools and techniques that have been experimentally validated (inspections are a notable exception).

Finding Nemo Architecture (5, Interesting)

Anonymous Coward | more than 8 years ago | (#15458287)

This reminds me of my previous job. One day the owner of the company came up with a brilliant idea. He had just watched the movie "Finding Nemo" and asked me, "have you ever seen finding nemo? You know those little silver fish? I think we should design a system based on those little silver fish. If we get enough small components they can be combined into any piece of software. Eventually we wouldn't need any more components and thus no more software developers. All of our software would be made by sales guys who could just combine these components into any software we need." I remember thinking to myself that we could just start with quarks and we could build everything in the universe. But I didn't say anything and was just happy to not be chosen to be on the team creating the silver fish.

Six years, dozens of programmers, and millions of dollars later, the Finding Nemo architecture has been a bust. The owner of the company refuses to give up on the idea. They have currently created components of "and" and "or" gates and use "wires" to put them together. It reminds me of entry-level electrical engineering, back before they tell you that writing software on flash is usually easier and cheaper than wiring together dozens of ICs. In any case, I eventually did get sucked into the project and promptly left the company.

Re:Finding Nemo Architecture (1)

AuMatar (183847) | more than 8 years ago | (#15458348)

Well, he didn't totally have the wrong idea, he just took it too far and went the wrong way. Reusable components are good. If you can just download a library that can do X, write a bit of glue code, and be done, your productivity has skyrocketed. But there will always be glue code to write. And the idea isn't to write every component you'll ever need first; it's to write/find libraries as you need them, being careful to write them in a reusable fashion. You'll never have your sales guys as your main coders, but if your apps are all similar you can greatly cut your turnaround time on a new app.

Re:Finding Nemo Architecture (4, Interesting)

TapeCutter (624760) | more than 8 years ago | (#15459981)

"If you can just download a library that can do X, write a bit of glue code, and be done your productivity has skyrocketed."

When I worked for IBM in the 90's the CEO made the pronouncement: "All code has been written, it just needs to be managed". We all thought he was clueless, nevertheless here I am 10yrs later writing "glue code" for somebody else and IBM is still the largest "software as a service" company on the planet.

Re:Finding Nemo Architecture (1)

pete6677 (681676) | more than 8 years ago | (#15458432)

That story showcases the weakest component in the software design process: humans. In this case, the owner of the company.

Re:Finding Nemo Architecture (0)

Anonymous Coward | more than 8 years ago | (#15458484)

Six years ago? Your boss must have been watching a preview. http://www.imdb.com/title/tt0266543/ [imdb.com]

Re:Finding Nemo Architecture (1)

boomgopher (627124) | more than 8 years ago | (#15458565)

I don't think the idea is bad, but your boss/company seems to equate the "magic" of schools of fish to their small size, whereas it's their capability to self-organize using simple rules, à la the Game of Life, cellular automata, etc.

Re:Finding Nemo Architecture (2, Insightful)

140Mandak262Jamuna (970587) | more than 8 years ago | (#15459055)

It is six years since Finding Nemo was released? Looks like it was yesterday I saw the movie. [quick googling "finding nemo year release"] 2003. How long would it have taken the "tool" to find this contradiction in your posting?

Re:Finding Nemo Architecture (1)

Sleuth (19262) | more than 8 years ago | (#15459141)

Oooh, that's an easy one. A project like that feels at least twice as long as it actually is!

Re:Finding Nemo Architecture (2, Interesting)

jvkjvk (102057) | more than 8 years ago | (#15459154)

Fish school through autonomous interaction with the state of their observable surroundings. Most of the time, the local heuristic on movement is not very linear, and the swarm folds on itself, moves randomly, and changes volume and surface topology.

When this type of intelligence is directed toward some more concrete goal, such as getting away from a predator (for fish), it turns out that the average path can be near-optimal if the proper heuristics are chosen.

http://en.wikipedia.org/wiki/Swarm_intelligence [wikipedia.org]

Re:Finding Nemo Architecture (2, Insightful)

russellh (547685) | more than 8 years ago | (#15459481)

Yep.

If we get enough small components they can be combined into any piece of software. Eventually we wouldn't need any more components and thus no more software developers.

The key is that phrase "can be combined", although my second pick is "eventually". Your Finding Nemo system would have to be self-organizing, because it is too vast to have organization imposed from without. You already have that kind of system today anyway. So if you have a self-organizing system, two questions are: a) how does it arise, and b) how do you get it to do what you want? The real-world analogy for this way of thinking is the garden and the gardener. You're thinking "wouldn't it be really cool if we had 'cells' which you could stack on top of one another and you could have any kind of plant you want", whereas what you want is to be the gardener instead and let the software plants grow by themselves (think of the orders of magnitude in scale between a group of cells and the entire garden). And so I think your "eventually" is a really, really, really long time.

Re:Finding Nemo Architecture (0)

Anonymous Coward | more than 8 years ago | (#15459912)

If we get enough small components they can be combined into any piece of software.

They're called controls.

Snakeoil/Panacea (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15458334)

Yet another article about a supposed solution to software quality problems by an author who just happens to have such a solution to sell you.

Software design and coding isn't easy, and beyond the fundamentals (code coverage tools, unit testing frameworks, etc.), I have yet to see automation tools that increase software quality in any real way.

Any person who knows anything about software quality knows that the answer is not to use "a tool that explores billions of possible executions of the system, looking for unusual conditions that would cause it to behave in an unexpected way."

For one thing, you need to approach at least the level of human intelligence to understand what is intended by a software design or code module, and often even humans don't understand it. If you don't understand what was expected, how in the hell can you look for unexpected conditions??

In short, if this is truly what the author is proposing, i.e., brute force checking of design/code, I would have to say the author just doesn't get it.

In gaming, it'd work like this.. (1)

Channard (693317) | more than 8 years ago | (#15458353)

Now a computer can discover the flaws in the design of a piece of software and advise the developers of them. Who, if they're in any way involved in the design of games, will promptly ignore them and release a post-release patch to fix the issues they knew were there anyway. But hang on... just how will you check for inconsistencies in the design of the analysis tools?

Re:In gaming, it'd work like this.. (1)

l33td00d42 (873726) | more than 8 years ago | (#15458988)

i bet you were the kid who would keep asking "... and why is that true?", never taking anything on faith/obviousness.

Re:In gaming, it'd work like this.. (1)

muridae (966931) | more than 8 years ago | (#15460048)

You run the checking software on itself. In one instance, it finds an error. In another, it just solves the halting problem and tells you that your program works fine.

These tools have very limited applicability (2, Insightful)

JurassicPizza (972175) | more than 8 years ago | (#15458354)

These types of algorithmic testing tools are useful for small, truly critical functionality that has to work perfectly. It's not cost effective to try to model typical complex software in a manner that supports testing as described in the article. Most programming is not about designing the next great single algorithm, it's about integrating data, interacting with users, and providing all the logic to handle the myriad special cases that make up user requirements. Rarely is such a testing tool going to cover all the possibilities without a gargantuan effort to model the software -- which effort will most likely not be able to keep up with the actual development anyway. These tools won't be widely accepted until they can automatically read source code and create a software's model without programmer input.

Re:These tools have very limited applicability (1)

l33td00d42 (873726) | more than 8 years ago | (#15459070)

Rarely is such a testing tool going to cover all the possibilities without a gargantuan effort to model the software -- which effort will most likely not be able to keep up with the actual development anyway. These tools won't be widely accepted until they can automatically read source code and create a software's model without programmer input.

did you RTFA? design first, code later.

there are other tools for analyzing code after it's written. this article is not about those, but there's lots of work out there on automatic reverse engineering and model extraction.

Re:These tools have very limited applicability (2, Interesting)

JurassicPizza (972175) | more than 8 years ago | (#15460150)

Yes, the magazine actually came out three weeks ago. "Design first" at the level of detail required for this type of testing to work (complete pseudocode) is pretty much never going to happen with a major software system. Perhaps with small critical algorithms, yes, or where the risk/reward of that level of design is warranted. Most software is not going to qualify.

Crutch for bad developers (3, Insightful)

cryptomancer (158526) | more than 8 years ago | (#15458363)

Sounds like some producer wants a magic-bullet program to replace some bad-performing designer. Even in the case of a 'useful' tool to apply to projects, this is likely to become an excuse for when an inconsistency is found later on by QA: "the program said it was good!"

It's not going to find everything, let alone fix it. See Turing: the halting problem.

Re:Crutch for bad developers (1)

DragonWriter (970822) | more than 8 years ago | (#15458447)

It's not going to find everything, let alone fix it.
OTOH, it may find plenty of things that would otherwise be missed. Of course, people will misuse it sometimes and blame the results on the software. That's not a reason to think the software is a bad idea -- it's not like not having automated validation software will stop people from doing inadequate QA, or poor design.

Detecting too late (2, Interesting)

Doctor Memory (6336) | more than 8 years ago | (#15458403)

If they're checking the software design for inconsistencies, then they're too late. What is needed is some way to formally specify user requirements, so that they can be checked for completeness and consistency. Use cases are nice, but they're not sufficiently rigorous to capture absolutely all the requirements. I know there have been some schemes tossed around for requirements validation, but none that I've seen have really been general-purpose enough for the average project.

Re:Detecting too late (1)

Peter La Casse (3992) | more than 8 years ago | (#15458793)

There are formal methods such as Z Notation [wikipedia.org] for specifying the behavior of a general-purpose (not even necessarily computer-based) system in a provable, checkable way. Unfortunately, precisely specifying the behavior of a nontrivial system is a lot of work, as is learning how to use a formal method in the first place. Theoretically, someday tools will make it easier, but specifying the intended behavior ahead of time will still be time-consuming and often difficult to justify to management.

Other posters have it right: the biggest benefit will come from tools that import existing code and turn it into a formal specification of some kind (which can then be checked for logical inconsistencies, used to produce diagrams, and other fun things.) I could see a programmer using source code as the interface to a formal methods based system: write a code skeleton, import it into the tool to generate the initial model, tweak the skeleton, look at the model again, etc. With that kind of tool, it doesn't even really matter which formal methods approach is used; that's an implementation issue that could be abstracted away (or implemented via plugins.)

Compile-time assertions (1)

Latent Heat (558884) | more than 8 years ago | (#15459797)

Maybe what is needed is a dual source code -- one source code specifying what is supposed to happen (specification) and another source code specifying how it is supposed to happen (algorithm). If you had a complete specification, you would think you could automatically generate the algorithm, but in that case, your specification would be a form of programming and would have to be airtight.

But why does the specification have to be a one-to-one mapping onto the algorithm? Couldn't the specification be in the form of an assertion? An assertion is a form of runtime test case, and test cases are never exhaustive with regard to what a program does -- otherwise you could use the test cases as the specification and automatically generate code from them. (Although there does seem to be a programming style of generating the test cases first and having code monkeys pound on typewriters until they pass the test cases.)

So instead of a traditional assertion -- a runtime test case written into the code -- you have a compile-time assertion. The assertion could be that for int i, int j, at a certain point in the code i == j, and the result could be true, false, or undecidable. There is a little of that built into compilers today -- warning messages about assigning a value to a variable and never using it, catching uninitialized variables (some of the time), and the like.
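That idea can be sketched by evaluating an assertion over a bounded model of the program states at a point, before the program ever runs. A hypothetical toy (all names invented; a real checker would work symbolically and could also answer "undecidable"):

```python
# A toy "compile-time assertion": rather than executing the assertion
# while the program runs, evaluate it over an enumerable model of the
# reachable states at a program point. All names are invented.

def check_assertion(assertion, states):
    """Return ('true', None) if the assertion holds in every state,
    or ('false', counterexample) with the first failing state."""
    for state in states:
        if not assertion(state):
            return ("false", state)
    return ("true", None)

# Model the states at the point just after `i = j * 2`,
# with j ranging over 0..9.
states = [{"i": j * 2, "j": j} for j in range(10)]

print(check_assertion(lambda s: s["i"] == 2 * s["j"], states))  # ('true', None)
print(check_assertion(lambda s: s["i"] == s["j"], states))      # ('false', {'i': 2, 'j': 1})
```

A bounded enumeration like this can only answer for the states it models; the "undecidable" verdict the parent mentions is exactly what you get when no finite model covers all reachable states.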

PHP security checker (3, Funny)

roger6106 (847020) | more than 8 years ago | (#15458409)

Making a tool to check most programs for errors sounds extremely complicated, but wouldn't it be possible to make a simpler tool that checks the security of a PHP/MySQL website?

Re:PHP security checker (1)

cablepokerface (718716) | more than 8 years ago | (#15458788)

Making a tool to check most programs for errors sounds extremely complicated, but wouldn't it be possible to make a simpler tool that checks the security of a PHP/MySQL website?

If this would be a thread about car safety, you would be saying something along the lines of "It's nice that cars are safe and all, but can I have an apple?"

Good Software Design (4, Insightful)

KidSock (150684) | more than 8 years ago | (#15458413)

A good design correctly models the concept of what it is you're trying to achieve with the program. Ultimately this means the programming interfaces (APIs) for each concept are correct [1]. Don't design interfaces around procedures. Don't design interfaces around the physical world. Design to *concepts* and *ideas*. This is superior because you will never discover at a later time that the code is fundamentally flawed and needs to be totally rewritten. If the interface correctly models the concept then, by definition, it CANNOT be wrong. If it is wrong, then you simply didn't understand the concept well enough, or you failed to translate that concept into a suitable interface, and you just need to think more and type less. If you do get things right, you'll find that the major pieces dovetail together perfectly [2]. The implementation can be wrong and may need to be rewritten, but if the interface correctly represents the concept, the rewrite will be localized to one library or part of a library. That is a much more straightforward matter than using a bad design and finding halfway through a project that the required changes transcend the whole system.

And thus you cannot validate a design, because that would require representing a concept and determining whether an interface suitably models it. That is HARD. If that were possible, you would effectively have a thinking, rationalizing brain (Artificial Intelligence), in which case you wouldn't be dorking around with validating programs, you would be dynamically generating them.

[1] Frequently people advocate that interfaces be "well defined". That just means there are no holes in the logic of their use. Personally, I think a well-defined interface is useless if it does not correctly model a concept. You can always go back and fill in the holes later.
[2] Although this is also when you discover that you didn't get the concept right and need to adjust the interface (hopefully not by much)

Re:Good Software Design (2, Insightful)

theMAGE (51991) | more than 8 years ago | (#15459639)

A well defined interface means that if you build 1 million holes in a plank and I deliver 1 million pegs, when they "meet" they fit.

A square hole and a round hole, one 1 inch in diameter and one 1 foot wide, all of them model the concept, but are utterly useless.

You can always go back and fill in the holes later.

No you cannot, otherwise we would all use Dvorak keyboards, not this stupid Qwerty. And we would have had HDTV 15 years ago. And...

URLs in an image? (0, Troll)

SanityInAnarchy (655584) | more than 8 years ago | (#15458444)

Wow. It's indeed a table, even a good target for implementation as an HTML table. It has links in it, which point to places to look at software. And yet it's presented as an image, not even an image map, so despite already being on a website, we have no choice but to type them in!

If this is reliable, I don't mind unreliability, but at least let me copy and paste!

Static code analysis? (0)

Anonymous Coward | more than 8 years ago | (#15458657)

Isn't this just glorified static code analysis [wikipedia.org] ?

There are tons of static analysis programs out there that look for everything from errors to memory leaks to security weaknesses to gathering code quality metrics to conformance to stylistic standards.

For example I use PMD [sourceforge.net] to help tone up my Java code already. There are even fancier software packages that claim to do more interesting things, but it is all just static analysis.

There's a ton of similar projects [sourceforge.net] listed on PMD's site alone. In fact Eclipse itself employs a lot of static analysis checks to give you those helpful hints and warnings that javac would not catch on its own. And that is just for Java and just for detecting common coding errors. There are whole other categories of these applications that take a higher level look at the architecture.

These kinds of tools are quite common and effective at cutting down on errors when used properly. They can also speed development by avoiding troublesome pitfalls that can take hours or days to debug. Or worse, cause the failure of the project. Just reading through the existing checks these kinds of programs make can educate you on best practices to help prevent writing bad code in the first place.

And before everyone goes "oh yeah, we used that once, pain in the ass," you should read the top best practice [sourceforge.net] on PMD's website. Choose the rules that are right for you. Start slow. Don't turn everything on just to watch thousands of pages of errors fly by when you're only interested in 20 that really matter.

Re:Static code analysis? (1)

l33td00d42 (873726) | more than 8 years ago | (#15459125)

Isn't this just glorified static code analysis?

no. RTFA.

if you already did, then you're just dumb.


Nobody can grasp the idea??? (0)

Anonymous Coward | more than 8 years ago | (#15458760)

I work in the field of formal verification. I've seen all sorts of comments here, mainly of the kind "hey, I know what to do, and these guys don't", but nobody, really nobody, wondered "maybe there is some truth in what all these people do"? Come on! Nobody expects to check the whole of Internet Explorer or Firefox for errors :) But if you have a communication protocol which is _supposed_ to be secure, the only way to know that it _is_ secure is to explore all of its states. And, also... do you remember when they said that ASM was the way to go and C could get you nowhere? Come on, before judging, learn a little more about the motivations of people working for a purpose. Yes, quality of software is one thing, and _maybe_ stability or robustness are different things (well, I don't think so), but are they incompatible? Would more robust or stable software improve your life? Don't programs crash for idiotic reasons? Don't you think that if they'd been verified, they would not crash?
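"Explore all of its states" can be illustrated with a tiny model checker. The protocol below (a non-atomic "check, then enter" mutual exclusion) is deliberately broken, and everything here is an invented toy, not a real verification tool:

```python
# Exhaustive state-space exploration of a toy two-process protocol.
# Each process checks that the other is not in its critical section,
# then enters one step later -- so the check can be stale.

from collections import deque

def step(state):
    """Yield every successor of the global state (p0, p1)."""
    for i in (0, 1):
        me, other = state[i], state[1 - i]
        if me == "idle" and other != "crit":
            nxt = "checked"              # passed the (non-atomic) check
        elif me == "checked":
            nxt = "crit"                 # enter, trusting a stale check
        elif me == "crit":
            nxt = "idle"                 # leave the critical section
        else:
            continue
        yield tuple(nxt if j == i else state[j] for j in range(2))

def explore(start=("idle", "idle")):
    """Breadth-first search over every reachable state, checking the
    mutual-exclusion invariant in each one."""
    seen, queue, violations = {start}, deque([start]), []
    while queue:
        s = queue.popleft()
        if s == ("crit", "crit"):
            violations.append(s)
        for t in step(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return violations

print(explore())  # [('crit', 'crit')] -- both processes enter at once
```

Because the state space here is finite and tiny, the search is exhaustive: the bug is found with certainty, which is exactly the guarantee the parent is describing and exactly what testing a few runs cannot give.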

Re:Nobody can grasp the idea??? (1)

RobertLTux (260313) | more than 8 years ago | (#15459382)

And I think three things would prevent a lot of crashes:


1. Have your program check that the platform is sane before it installs
(min OS version, enough memory, enough drive space, yadda).

2. Clean any input more thoroughly than MAFIA money (if a form will choke when more than 30 characters are input in a field, then truncate after character 29).

3. Race tickets, anybody???

Generic armchair quarterback sacking (1)

Orrin Bloquy (898571) | more than 8 years ago | (#15459022)

I actually read this article last week, unlike most of the people who've responded so far. The principle behind this concept is reasonably sound, except that the example they've given in it (seating Montagues and Capulets at Romeo & Juliet's reception) requires you to understand every unspoken assumption to make the tester work properly.

Jackson doesn't claim it'll find everything. What he says is that it carefully synthesizes the two previous approaches to software testing, reducing the amount of time-expensive calculation in the process thereby making it a more viable method.

The problems .. (1)

hisstory student (745582) | more than 8 years ago | (#15459164)

.. remain the same: describing the problem in the first place. Alloy (or any other set of design programs) can only analyze the information that it's provided. It may be able to flag problems that would have been missed by a human analyst, but it can't possibly deal with real world systems which will invariably produce conditions that weren't considered in the first place. Patching never produces a system as reliable as one that might have been described thoroughly in the first place (a practical impossibility).

Why is this billed as software help? (3, Interesting)

lilnobody (148653) | more than 8 years ago | (#15459191)

The article sounded good, so I went to the alloy site. Having just read through the tutorials, and some of the docs, I can't imagine what possible use this software could have in the majority of software development.

It's basically a nifty, graphical declarative programming language. Anyone familiar with Prolog and set theory will breeze through the docs, only to ask "Why?" at the end.

One of the tutorials, for example, is a way to get Alloy to create a set of actions for the river crossing problem, and list them. Thus, Alloy has saved the poor farmer's chicken. It's actually quite a cool set of toys for set theory, and it generates all possible permutations of a system with rules and facts based on how many total entities there are in the system, and checks the system for consistency. There are doubtless uses for this, but software development is, at the moment, not one of them.
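For the curious, here is what that river-crossing search looks like done imperatively in plain Python: enumerate moves, prune the states the rules forbid, and breadth-first search the rest. Alloy expresses the same thing declaratively in set theory; the names and structure below are my own sketch, not the tutorial's.

```python
# Brute-force river crossing: farmer, fox, chicken, grain. The fox eats
# the chicken and the chicken eats the grain whenever the farmer is away.
from collections import deque

ITEMS = {"fox", "chicken", "grain"}

def safe(bank):
    # the farmer is absent from this bank, so nothing may eat anything
    return not ({"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank)

def solve():
    # state: (items on the left bank, farmer's side); everything starts left
    start = (frozenset(ITEMS), "left")
    goal = (frozenset(), "right")
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        (left, side), path = frontier.popleft()
        if (left, side) == goal:
            return path
        here = left if side == "left" else ITEMS - left
        # the farmer crosses alone, or with one item from his bank
        for cargo in [frozenset()] + [frozenset([x]) for x in here]:
            new_left = left - cargo if side == "left" else left | cargo
            other = "right" if side == "left" else "left"
            # the bank the farmer just left must be safe
            if not safe(new_left if other == "right" else ITEMS - new_left):
                continue
            state = (new_left, other)
            if state not in seen:
                seen.add(state)
                frontier.append((state, path + [(sorted(cargo), other)]))

plan = solve()
print(len(plan))  # 7 crossings save the chicken
```

Seven crossings is the known optimum, and breadth-first search finds it because it explores shorter move sequences first.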

The other tutorial is about how to set up the concept of a file system. The tutorial sets up a false assertion (assertion = something for Alloy to test), which fails. Here is the reasoning, with summary to follow for disinterested:

Why can this happen? First, let's note that both delete functions cause the rows of the contents relation in the post-state to be a subset of the rows in the pre-state. And we know the FileSystem appended facts "objects = root.*contents" and "contents in objects->objects" constrains the root of the file system to be the root of the contents relation. So if the post-state has a non-empty contents relation, it will have the same root as the pre-state. However if delete function causes the post-state to have an empty contents relation, then the root is free to change arbitrarily, to any directory available. Bet you didn't see that coming. Good thing we wrote a model!

Basically this says that in a 2-node scenario, i.e. a root directory with one subdirectory, they delete the subdirectory with their delete function. The way they defined the delete function basically meant that the 'deleted' node could, in theory, be the root of the file system after the deletion operation occurred, since there was no constraint on which node of the file system was root after the change. They basically said under these constraints, it was possible to define a 'delete' function that deletes the subdirectory in a 2-node filesystem and then makes that same subdirectory the root of the filesystem.
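The under-constrained delete can be re-enacted outside Alloy. This throwaway Python sketch (names invented, not from the tutorial) enumerates every post-state the stated constraints still allow, and the deleted directory duly turns up as a legal root:

```python
# Re-enacting the tutorial's bug: "delete" only constrains `contents`,
# and the root is only pinned when `contents` is non-empty. So when the
# last edge is deleted, any node may claim the root role.
nodes = ["root", "sub"]
pre_contents = {("root", "sub")}          # root has one subdirectory

def allowed_post_states(deleted):
    # delete constraint: post contents is the pre contents minus `deleted`
    post_contents = {(p, c) for (p, c) in pre_contents if c != deleted}
    for new_root in nodes:
        # the appended facts pin the root only while contents is non-empty
        if post_contents and new_root != "root":
            continue
        yield (new_root, post_contents)

for root, contents in allowed_post_states("sub"):
    print(root, contents)
# the deleted "sub" appears as a legal root: the counterexample Alloy finds
```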

Good thing we built a model, indeed! A bug in the programming of your model is by no means a valid use for spending a significant amount of effort modeling a concept in set theory. The best part is that all of your effort amounts to mental masturbation--there are no tools for turning this into a programming contract, test cases, or anything. Some projects are in the works in the links area, but they aren't there yet, and only for Java. I don't see how the amount of effort that would be required to model a large scale, realistic project in this obtuse set notation would be worth it for absolutely no concrete programming payoff. Writing HR's latest payroll widget, or even their entire payroll system, is just not going to get any benefit from this.

All that aside, it's conceivable that this sort of model programming could find use in complicated systems in which high reliability is paramount. The usual suspects, such as satellites, space, deep sea robots or whatever come to mind--this system could prove, for example, that a given system for firmware upgrades cannot leave a robot on Mars in an inoperable state, unable to download new, non-buggy firmware.

But it still can't prove the implementation works. *slaps forehead*

nobody

Won't help (1)

bill_kress (99356) | more than 8 years ago | (#15459298)

Although you should design at this level, many problems hit LONG before design. The big problems I've seen have been in the analysis stage where you gather customer requirements and translate them into a very detailed requirements document.

If your non-trivial project lacks such a document, it will probably fail.

The only way to overcome a lack of requirements is to have a heroic effort by one or more engineers, and even then you end up with many of the same problems.

The problems will stem from missing a few requirements in the up-front design, which then requires patches and modifications the code was never designed to handle.

As software engineers we MUST try to examine our problem space and come up with the most generic solution possible--the fewest screens, procedures, interfaces and objects to handle the widest variety of problems.

This means that when we have the design "Locked in" we have to make any "New" problems fit the design, often molding the desired presentation or functionality just a little. If your marketing research team did their jobs (I've NEVER seen this happen), you won't run into any problems that don't fit the design. Again, never happens.

When the problem is just a little too, umm, chunky to mold to fit our current design, we have to modify the design. This is where all the flakiness enters the process. After a few of these, you have to throw away your design and start over. Nobody ever does that, so the project fails.

Yeah, it's hard to come up with a detailed (enough) set of requirements--Really Hard--and apparently very few marketeers are up to the task. Often engineers help, but even then, this is a HARD TASK. It can take years to properly develop a set of requirements, and it cannot be done properly in under a month.

If a company is dedicated to this and then enforces a few simple procedures (like requiring automated testing of every requirement) your project will almost certainly not fail.

I'm not saying that a good software design isn't necessary (I disagree with the XP crowd there), but you could succeed without the kind of architecture they are talking about in this article. On the other hand, no architecture can make up for a lack of requirements.

Software Design Should Be Like Hardware Design (1)

MOBE2001 (263700) | more than 8 years ago | (#15459501)

FTA: More recently, researchers have adopted a very different approach, one that harnesses the power of today's faster processors to test every possible scenario. This method, known as model checking, is now used extensively to verify integrated-circuit designs.

The problem with this is that algorithmic software does not work like ICs. The only way to solve the crisis is to abandon the algorithmic software model and embrace a non-algorithmic, signal-based, synchronous model. This is the model used in hardware design. It all started with Lady Ada and her table of instructions for Babbage's analytical engine, but the writing is on the wall: 150 years of algorithmic computing has run its course and it's now time for a change. We must radically change our way of programming, and we must do so at the fundamental level. We must design software like we design hardware. We must also redesign our central processors so that they are optimized for the new paradigm, not for the old algorithmic model as they are now. Until then, we will continue to pay a heavy price for buggy software systems.

Autotest (1)

ButcherCH (822663) | more than 8 years ago | (#15459572)

A tool not listed is AutoTest http://se.inf.ethz.ch/people/leitner/auto_test/ [inf.ethz.ch] which makes use of Eiffel's contracts. I did use it to some degree and it can really help to find some errors.
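AutoTest itself drives Eiffel routines from their contracts; this is just the flavour of the idea transplanted to Python as a sketch (the routine and thresholds are mine): write pre/postconditions as assertions and fire random inputs at the routine until a contract breaks.

```python
# Contract-style random testing in miniature. Assertions play the role of
# Eiffel's require/ensure clauses; the driver hunts for violating inputs.
import random

def int_sqrt(n):
    assert n >= 0                       # precondition (the "require" clause)
    r = int(n ** 0.5)
    assert r * r <= n < (r + 1) ** 2    # postcondition (the "ensure" clause)
    return r

def auto_test(routine, gen, runs=1000):
    failures = []
    for _ in range(runs):
        arg = gen()
        try:
            routine(arg)
        except AssertionError:
            failures.append(arg)        # a contract violation: a real bug
    return failures

random.seed(0)
print(auto_test(int_sqrt, lambda: random.randint(0, 10**6)))  # [] here
```

When a contract does break, the failing argument is itself the bug report, which is what makes contracts such a good fit for automated testing.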

Wetware sceptics. (0)

Anonymous Coward | more than 8 years ago | (#15459602)

The interesting thing about this story is that it shows the never-ending battle to get ones and zeros to do what we want. And yet biological computers don't have stories written about tool so-and-so among a long list of tools that'll finally make your biological computer do what you want it to do. It does what it needs to do regardless of sceptics or converts.

Did they use these tools in developing them? (2, Interesting)

140Mandak262Jamuna (970587) | more than 8 years ago | (#15459855)

One of my friends did a project for his masters: some simple code that would read the submitted source, count the number of code lines and comment lines, compute the average number of lines per function, etc., and print out some stats about the "quality of the code". His prof ran the project's own source as the input! It flunked itself for not having enough comments, for having functions that were too long, for not breaking up large source files, and for using too many nested levels of code.
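A toy version of that masters project is easy to reconstruct (the thresholds and heuristics here are invented for the example): count code versus comment lines, grade the result, and, in the professor's spirit, point it at its own source.

```python
# Count code vs. comment lines and grade the file. Crude on purpose:
# it only recognises '#' comments and misses docstrings entirely.
def stats(lines):
    code = comments = blank = 0
    for line in lines:
        s = line.strip()
        if not s:
            blank += 1
        elif s.startswith("#"):
            comments += 1
        else:
            code += 1
    return code, comments, blank

def grade(code, comments):
    # flunk any file with a comment ratio under 20% (an arbitrary bar)
    return "pass" if comments >= 0.2 * code else "flunk"

try:
    own = open(__file__, encoding="utf-8").read().splitlines()
    print(grade(*stats(own)[:2]))    # does it flunk itself?
except NameError:
    pass                             # not running from a file
```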

Microsoft sells collaboration software and project management software. And its products are not shipping any faster.

These guys are touting Alloy and tools; sounds like old CASE wine in a new bottle. Did they use these tools to design the tools? At least, would they use these tools and Alloy to create the next version? Could they demonstrate that these tools can handle a project of that complexity? And produce provably better code with no bugs?

Please forgive me if I am underwhelmed.

Formal Modeling/Model Checking (5, Informative)

aeroz3 (306042) | more than 8 years ago | (#15459929)

The point of these tools is simply to verify the consistency of a design, not to execute or examine existing source code. The steps involved are:
1) Come up with software design
2) Implement software design in one of these tools (model it in Z, or as a state machine using fsp/ltsa)
3) Use said tool to verify the consistency of the design.

Now, this activity, of course, takes a lot of time, and is unlikely to ever be of any use to your average J2EE/Ajax/Enterprise application. Areas where they CAN be of use are things such as life-critical systems, for instance medical devices or airplane control systems. Using something like FSP/LTSA you can model, check, and verify that your design does not ever allow the system to enter an invalid state. Now, remember, this says nothing about the final code; there is a separate issue of the code not matching the design, but it is possible to verify that the design never leads to invalid states.
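FSP/LTSA has its own notation; as a rough sketch of steps 2 and 3 in Python, for a made-up two-variable device design: model the design as a transition table, then exhaustively check that no reachable state is invalid.

```python
# Step 2: the design as a state machine. State is (door, motor); the
# transition table lists exactly the events the design allows.
TRANSITIONS = {
    ("closed", "off"): [("open_door",  ("open", "off")),
                        ("start",      ("closed", "on"))],
    ("closed", "on"):  [("stop",       ("closed", "off"))],
    ("open", "off"):   [("close_door", ("closed", "off"))],
}

def invalid(state):
    door, motor = state
    return door == "open" and motor == "on"   # must never occur

# Step 3: exhaustively enumerate reachable states and check each one.
def reachable(start):
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for _event, t in TRANSITIONS.get(s, []):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

bad = [s for s in reachable(("closed", "off")) if invalid(s)]
print(bad)  # []: the design, as modelled, never reaches an invalid state
```

As the commenter says, this verifies the design, not the code: a faithful implementation of the table inherits the guarantee, an unfaithful one doesn't.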

snake oil software design rather than genuine .... (2, Interesting)

3seas (184403) | more than 8 years ago | (#15460052)

...science of abstraction physics.

yes, the software industry is still playing with magic potions and introductory alchemy.

Why? The answer is simple to give:

money, job security and social status.

Someone posted that they were warned that their jobs would become extinct upon automated software development.
But the fact is... who but those who have their jobs to risk are in a position to employ such tools?

snake oil software development is a self-supported dependency... far from genuine computer software science (of which we haven't really seen any since the US government held up the money carrot for code breakers during WWII).

For more information... (2, Informative)

dwheeler (321049) | more than 8 years ago | (#15460216)

The referenced article has a lot about formal methods tools (including "light" formal methods tools). See the paper High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)... with Lots on Formal Methods [dwheeler.com] for FLOSS programs that support this. For a list of some tools that look for security vulnerabilities, see the FlawFinder web site [dwheeler.com] , which links to others.

Alloy is a cool tool, if it does something you want done. But nobody should be fooled into thinking that you can just run Alloy and suddenly your code is perfect. Alloy just helps you check out a model based on set theory, etc... it's a long distance from models like that to the actual code.

in summary (1)

3seas (184403) | more than 8 years ago | (#15460302)

even before any code is written, there is a failure to develop a design science