
A Better Way To Program

Soulskill posted more than 2 years ago | from the creativity-by-slider-bars dept.

mikejuk writes "This video will change the way you think about programming. The argument is clear and impressive — it suggests that we really are building programs with one hand tied behind our backs. Programmers can only understand their code by pretending to be computers and running it in their heads. As this video shows, this is incredibly inefficient and, as we generally have a computer in front of us, why not use it to help us understand the code? The key is probably interactivity. Don't wait for a compile to complete to see what effect your code has on things — if you can see it in real time then programming becomes much easier."


Great but... (5, Interesting)

Anonymous Coward | more than 2 years ago | (#39319409)

Seems like it mostly only applies to GUI programming or programming with results expressed through a GUI. What about, say, kernel programming?

Re:Great but... (5, Funny)

viperidaenz (2515578) | more than 2 years ago | (#39319507)

Visualise the kernel then. They didn't have any problems in The Matrix.

Re:Great but... (5, Insightful)

spyked (1878060) | more than 2 years ago | (#39319663)

That sounds simple but it isn't. While you could theoretically do this from a virtual machine, the difference between "visualising" it and testing it on real hardware is significant, especially when it comes to device drivers, which are known to be the most common source of bugs in kernels.

Plus, verifying a kernel or a compiler is a pretty hard problem; it's a miracle if you manage to do it in decent time, let alone manage to visualise it in any way.

Re:Great but... (0)

Anonymous Coward | more than 2 years ago | (#39319791)

WOOOSH

I mean, right over your head at mach 2.

Re:Great but... (1)

Apothem (1921856) | more than 2 years ago | (#39319519)

Hell, I'd like to see someone design another programming language with a programming language, if that's the case. If you've got something as based on GUI design as this is, why not, right?

Re:Great but... (1)

Anonymous Coward | more than 2 years ago | (#39319675)

Seeing as you watched the entire hour-long video in 5 minutes, I think your question here might be answered by the remainder.

Re:Great but... (4, Informative)

blahplusplus (757119) | more than 2 years ago | (#39319771)

You didn't see the point when he showed how you could find bugs in algorithms as you typed them.

Re:Great but... (5, Insightful)

Junta (36770) | more than 2 years ago | (#39319855)

In the video he covers that as well. Well, at least he conceptually says it's covered; I disagree...

Let's start with his abstract example. His binary search looks straightforward on the surface, and he wanted to portray the tool as magically finding bugs, as he got a float in one instance and an infinite loop in another. However, the infinite loop example was found because he *knew* what he was doing: he intentionally miswrote it to start with, and intentionally changed the inputs in accordance with that knowledge. There are a few more possibilities that you have to *know* to try out. For example, he didn't try a value lower than the lowest (would have panned out), he didn't try a value omitted from the list but still higher than the lowest and lower than the highest (which also would have been fine), and he didn't try an unordered list (which is incorrect usage, but accounting for incorrect usage is a fact of life). He didn't try varying dataset sizes (in this algorithm it doesn't matter, but he has to *know* that) or different types of data. You still have the fact that 'B' is smaller than 'a' and all sorts of 'non-intuitive' things inherent in the situation.
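
To make that concrete, here's a quick sketch in Python (mine, not the talk's JavaScript) of the cases you have to *know* to feed in:

    def bsearch(xs, target):
        """Return an index of target in sorted xs, or None."""
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2   # (lo + hi) / 2 would reintroduce his float bug
            if xs[mid] == target:
                return mid
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None

    xs = [1, 3, 5, 7, 9]
    assert bsearch(xs, 0) is None          # below the lowest value
    assert bsearch(xs, 4) is None          # absent, but between min and max
    assert bsearch(['B', 'a'], 'a') == 1   # 'B' sorts before 'a' by code point
    assert bsearch([5, 1, 9], 5) is None   # unsorted input: silently wrong, no error

None of those failures would leap out of a live visualisation unless you already knew to try them.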

Now consider that binary search is a freshman-level programming problem, and therefore pretty low on the scale of complexity a developer is going to deal with. Much of software development deals with far more complicated scenarios than this, and the facility doesn't *really* cover even the binary search complexity.

I know I may sound more negative than is appropriate, but his enthusiasm and some people's buy-in can be risky. I've seen poor developers suckered in by various 'silver bullets' and produce lower quality code because they think that unit tests or other mechanisms passed and they can rest easy. Using these tools is good, but it should always be accompanied by some wariness to avoid overconfidence.

Re:Great but... (3, Insightful)

ghostdoc (1235612) | more than 2 years ago | (#39319919)

So your point, basically, is that programming is all about knowing what could go wrong with your code?

Not a bad definition actually...it would certainly explain why coding productivity increases in step with experience; you've made those mistakes already and now know to avoid them.

Re:Great but... (5, Funny)

sjames (1099) | more than 2 years ago | (#39319943)

What about, say, kernel programming?

If you enter a syntax error, you get a video of Orville Redenbacher ripping you a new one.

Connecting to your creation in Clojure (4, Interesting)

slasho81 (455509) | more than 2 years ago | (#39319411)

Here's an implementation [chris-granger.com] of Bret's ideas in Clojure.

Re:Connecting to your creation in Clojure (5, Informative)

justforgetme (1814588) | more than 2 years ago | (#39319657)

And here is the vimeo video [vimeo.com] for those who want to tear their eyes out when
visiting i-programmer and their 180px content column.

Re:Connecting to your creation in Clojure (0)

Anonymous Coward | more than 2 years ago | (#39319757)

Now if only someone would produce a link to a transcript of this video...

Re:Connecting to your creation in Clojure (0)

Anonymous Coward | more than 2 years ago | (#39320085)

No flash or html5 video, deaf or using lynx?

They invented the debugger! (4, Insightful)

Giant Electronic Bra (1229876) | more than 2 years ago | (#39319425)

Yeeehaaa! ;)

The tough problems aren't about running the code and seeing what happens, they're about setting up very specific situations and testing them easily.

Re:They invented the debugger! (4, Insightful)

vlm (69642) | more than 2 years ago | (#39319499)

The tough problems aren't about running the code and seeing what happens, they're about setting up very specific situations and testing them easily.

Handling non-specific unknown/unpredicted situations gracefully is also tough. Unsanitized user input, crazy failure modes, failure in other code making your state machines go bonkers... The trendy thing to do is just throw your hands up in the air and tell the user to reboot and/or reinstall, but that's not so cool.

Maybe another way to phrase it is that at least one of the specific situations needs to be the input of a random number generator doing crazy stuff.

Your Arabic to Roman numeral converter accepts an INT? Well, it had better not crash when fed a negative, or zero, or 2**63-1 (or whatever max_int is where you live), and any ole random place in between.
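
Something like this, say (a throwaway Python sketch; to_roman() is a made-up stand-in for whatever converter is under test):

    import random

    def hammer(to_roman):
        """Feed the converter the edge cases plus random values in between."""
        cases = [-1, 0, 1, 2**63 - 1]   # negative, zero, and max_int-ish
        cases += [random.randrange(1, 10**6) for _ in range(1000)]
        for n in cases:
            try:
                to_roman(n)             # may reject bad input...
            except ValueError:
                pass                    # ...rejecting politely is fine
            except Exception as e:
                print(f"to_roman({n}) crashed: {e!r}")   # crashing is not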

Re:They invented the debugger! (5, Interesting)

ktappe (747125) | more than 2 years ago | (#39319707)

The tough problems aren't about running the code and seeing what happens, they're about setting up very specific situations and testing them easily.

another way to phrase it is that at least one of the specific situations needs to be the input of a random number generator doing crazy stuff.

Your Arabic to Roman numeral converter accepts an INT? Well, it had better not crash when fed a negative, or zero, or 2**63-1 (or whatever max_int is where you live), and any ole random place in between.

This is still archaic thinking. A much more efficient way would be for the IDE to, when specifying a variable, ask there & then what the boundaries of the variable should be. Then the compiler could error any time it saw a situation where the variable could be (or was) handed a value outside those boundaries. Programmers should not be having to catch weird situations over and over; that's what computers are for. Allowing a variable to be any possible INT/FLOAT/REAL just doesn't make any sense in many situations so I'm quite curious why we're still having to even talk about random number generators for debugging & testing. It feels like we're still working for the computers instead of the other way around.
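
You can at least fake the declare-the-bounds-with-the-variable part at runtime today. A Python sketch (names are mine; what I'm actually asking for is this same check done by the compiler):

    class Bounded:
        """Descriptor that rejects assignments outside [lo, hi]."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __set_name__(self, owner, name):
            self.name = name
        def __get__(self, obj, objtype=None):
            return self if obj is None else obj.__dict__[self.name]
        def __set__(self, obj, value):
            if not (self.lo <= value <= self.hi):
                raise ValueError(f"{self.name}={value} outside [{self.lo}, {self.hi}]")
            obj.__dict__[self.name] = value

    class Reading:
        percent = Bounded(0, 100)   # boundaries declared right where the variable is

    r = Reading()
    r.percent = 42    # fine
    r.percent = 200   # ValueError at the assignment site, not deep in the logic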

Re:They invented the debugger! (0)

Anonymous Coward | more than 2 years ago | (#39319787)

Look up "halting problem".

Re:They invented the debugger! (0)

Anonymous Coward | more than 2 years ago | (#39319907)

That's not the halting problem, that's range checking on all of your variables. You don't have to know what the value of a variable is at any point in time (e.g. actually running the program), you just have to know the range that variable can take, and the ranges of the assignment expressions that assign to that variable. Those ranges are further composed of more ranges (either return types from functions, constants, or more variables with their own ranges attached), and can be computed at compile time. Now, checking the bounds of dynamic arrays is not possible (this is the halting problem, as you mentioned), but that is not what the GP was talking about.
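
That composition is just interval arithmetic. A toy sketch in Python (names are mine, and a real checker would run this statically over the syntax tree rather than at runtime):

    class Range:
        """Toy value range, composed the way a static checker would compose them."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Range(self.lo + other.lo, self.hi + other.hi)
        def __mul__(self, other):
            c = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
            return Range(min(c), max(c))
        def fits(self, other):
            return other.lo <= self.lo and self.hi <= other.hi

    x = Range(0, 9)                    # declared range of x
    y = Range(-3, 3)                   # declared range of y
    expr = x * y + Range(1, 1)         # computed range of the expression x*y + 1
    short = Range(-32768, 32767)
    print(expr.fits(short))            # True: the assignment is provably safe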

What this guy means is that he wants a language where the compiler checks your variable ranges for you, which is completely doable.

Now, this becomes more difficult when you factor in the fact that the same variable declarations (pointers, integers) can change sizes between hardware architectures, but the compiler already knows what hardware it is compiling for, so this can be a simple database query.

Also note that this was already done (with proprietary software) to find the logic error for the Ariane rocket that blew up. And it did find the place where they assigned a floating point range to a short and didn't check that the float was within the range of the short.

Maybe next time instead of simply replying with a generic non-answer you could actually think about the question and form a coherent response and explain why you came to that conclusion.

Re:They invented the debugger! (1)

TuringTest (533084) | more than 2 years ago | (#39319977)

That's why you put a human in charge, to recognize the infinite loop conditions and fix them by stopping the inference engine.

Re:They invented the debugger! (1)

Anonymous Coward | more than 2 years ago | (#39319895)

Because we still haven't reached the singularity, when the computer will know what you want without your explanations.

Sure, even today with sufficiently advanced type systems you can have compiler-verifiable types like "int from 0 to 9" or "positive float" or "a power of two", or even "two non-overlapping intervals" or "list sorted in ascending order" — but this makes programs so verbose that it's not viable for general usage, only for the times when verification is a must.

There won't be happiness until you can tell the computer "Make it cool" and it replies with "Did it 5 minutes ago".

Re:They invented the debugger! (0)

Anonymous Coward | more than 2 years ago | (#39319921)

A much more efficient way would be for the IDE to, when specifying a variable, ask there & then what the boundaries of the variable should be. Then the compiler could error any time it saw a situation where the variable could be (or was) handed a value outside those boundaries. Programmers should not be having to catch weird situations over and over; that's what computers are for.

This is the core idea in functional programming languages. Unfortunately there is not much interest in any language which is not an alternative notation for assembly.

Re:They invented the debugger! (3, Insightful)

Anonymous Coward | more than 2 years ago | (#39319947)

This is still archaic thinking. A much more efficient way would be for the IDE to, when specifying a variable, ask there & then what the boundaries of the variable should be. Then the compiler could error any time it saw a situation where the variable could be (or was) handed a value outside those boundaries. Programmers should not be having to catch weird situations over and over; that's what computers are for. Allowing a variable to be any possible INT/FLOAT/REAL just doesn't make any sense in many situations so I'm quite curious why we're still having to even talk about random number generators for debugging & testing. It feels like we're still working for the computers instead of the other way around.

A good Ada compiler could do this almost thirty years ago, due to the flexibility of the type system. Of course, Ada '83 looked hopelessly complex compared to the other languages of the time such as K&R C. Put that together with the fact that the major users were bureaucratic mega-corps involved in government aerospace projects and it acquired a horrible reputation as a bloated mess.

Time moved on. C begat C++ and C++ started adding features without much evidence of overall direction. For example it was never an explicit design goal for C++ templates to be Turing-complete. Features were added one after another, and one day someone pointed out that they had gone so far that templates could now be considered to be a language in themselves. Is it a good idea for a language to contain its own embedded meta-language? Is this something that results in maintainable, understandable code that can be analysed successfully? These questions did not matter because template meta-programming 'just happened' as C++ features agglomerated.

Nowadays the most recent version of Ada (Ada 2012) is probably one of the more straightforward and better designed languages, but its early reputation is unshakable. That's life I guess.

Strict Typing (2)

pavon (30274) | more than 2 years ago | (#39319975)

We have languages that force you to define the allowed range of an integer every time you define it (like Ada), and static analysis tools that find many of these problems. Entering this information into a dialog box wouldn't be any faster than typing it into the source code. The problem is that programmers consider these languages too cumbersome, and using such tools on other languages yields too many false positives. They would rather have freeform languages like Python, which are faster to program in and easier to read, but don't catch these types of errors for you, and instead require more discipline in unit testing (which is good to have anyway).

Re:They invented the debugger! (0)

Anonymous Coward | more than 2 years ago | (#39320005)

A much more efficient way would be for the IDE to, when specifying a variable, ask there & then what the boundaries of the variable should be. Then the compiler could error any time it saw a situation where the variable could be (or was) handed a value outside those boundaries.

That's already done in, for example, ASP.NET Dynamic Data (and Ruby on Rails...), although it only affects GUI input validation and content generation. Getters and setters for variables are all that is needed for validating back-end code. Anything more would just be syntactic sugar for those.

Link: http://msdn.microsoft.com/en-us/library/cc488545.aspx

Re:They invented the debugger! (1)

Anonymous Coward | more than 2 years ago | (#39320067)

This is still archaic thinking. A much more efficient way would be for the IDE to, when specifying a variable, ask there & then what the boundaries of the variable should be. Then the compiler could error any time it saw a situation where the variable could be (or was) handed a value outside those boundaries. Programmers should not be having to catch weird situations over and over; that's what computers are for. Allowing a variable to be any possible INT/FLOAT/REAL just doesn't make any sense in many situations so I'm quite curious why we're still having to even talk about random number generators for debugging & testing. It feels like we're still working for the computers instead of the other way around.

You would have loved Ada.

Conjecture. (3, Insightful)

tysonedwards (969693) | more than 2 years ago | (#39319431)

Until someone actually creates this new mythical language that Bret proposes, this is all conjecture that a hyper-efficient, overly intuitive programming language that can provide immediate feedback would be hyper-efficient, overly intuitive, and provide immediate feedback.

Basically, the video referenced by the article is no different than "wouldn't it be nice if we were no longer dependent on foreign oil... that would make so many things so much easier!"

Re:Conjecture. (4, Funny)

TheRaven64 (641858) | more than 2 years ago | (#39319495)

Yes, if only there were existing systems that worked that way. Such as the Lisp environment from 1958 or the Smalltalk environment from 1976. Such revolutionary new ideas about programming! I wonder if he will invent automatic refactoring tools next...

Re:Conjecture. (1)

vlm (69642) | more than 2 years ago | (#39319527)

Yes, if only there were existing systems that worked that way. Such as the Lisp environment from 1958 or the Smalltalk environment from 1976. Such revolutionary new ideas about programming! I wonder if he will invent automatic refactoring tools next...

Lisp.. Smalltalk... Oh wait, I've got a FORTH answer, oh no wait, that's just the third... (Sorry, bad joke; makes more sense if you know what FORTH is)

Gotta admit that the Venn diagram of languages with that kind of environment and "write-only languages" nearly perfectly overlaps. If someone would make a real-time dev system for Perl and Basic then we'd have near-perfect overlap.

Re:Conjecture. (2)

TheRaven64 (641858) | more than 2 years ago | (#39319597)

Not sure why you'd think Lisp or Smalltalk are write-only. Smalltalk is heavily used in the financial industry specifically because it is easy to quickly make significant changes to legacy code.

Re:Conjecture. (1)

KDR_11k (778916) | more than 2 years ago | (#39319647)

How about Java? If you run your program in debug mode inside Eclipse it'll do hot code replacement every time you save (doesn't ALWAYS work and obviously won't retroactively change results of previous calculations).

Re:Conjecture. (3, Informative)

Brummund (447393) | more than 2 years ago | (#39319655)

"Whoever does not understand LISP, is doomed to reinvent it".

(As a practical example, I used OpenGL in Lisp 10 years ago, and it was great to modify the code while the system was running.)

Re:Conjecture. (5, Insightful)

SJS (1851) | more than 2 years ago | (#39319733)

Smalltalk and Lisp are a good example, and they show (to me) that the problem isn't the language. The hard part about programming isn't the code.

The hard part about programming is understanding and decomposing the problem. If you're not any good at that, then no matter what language you use, you're going to struggle and produce crap.

This isn't to say that languages aren't important -- different languages lend themselves to particular problem-spaces by suggesting particular solutions. Picking the right language for the problem is as important as picking the right wrench for the nut.

But there will never be a DWIM language, because the big problem is getting the programmer's brain wrapped around what needs to be done. Once that's done, what's left is only difficult if the programmer doesn't have the figurative toolset on hand.

languages (1)

lkcl (517947) | more than 2 years ago | (#39319981)

The hard part about programming is understanding and decomposing the problem. If you're not any good at that, then no matter what language you use, you're going to struggle and produce crap.

there was a report a few years ago about an entrepreneur who *refused* to employ computer science majors. he only employed english language majors. the reason was he said that it was easier to communicate with english language majors in order to teach them programming, and they were more effective and also worked better when introduced into teams, than the people who had been taught programming at university.

so there is definitely something to learn here. but my god if imperial college had tried to get me to read jane austen instead of teaching me object-orientated and parallel computing architectures i would have hit the roof. mind you if they'd asked me to scan the book in and do some parallel processing on its contents then.. hmmm.... :)

Re:Conjecture. (2)

TuringTest (533084) | more than 2 years ago | (#39319989)

Sure, the difficult part is the problem. But if you have the computer doing the trivial and menial parts, the programmer's mind will have more resources free to spend analyzing the problem, and thus it will be easier to solve the difficult part.

Re:Conjecture. (1)

phantomfive (622387) | more than 2 years ago | (#39319735)

"wouldn't it be nice if we were no longer dependent on foreign oil... that would make so many things so much easier!"

The funniest thing about this one is that it wouldn't matter. Even if we were no longer dependent on foreign oil, so many other countries around the world would be, that the middle east would still be getting funding for their terrorist programs.

Re:Conjecture. (0)

Anonymous Coward | more than 2 years ago | (#39319937)

And a chimpanzee could be trained to reproduce the works of Shakespeare! All you need is a word processor with clear indicators for 'Shakespeareness' feedback!
Seriously though, probably the most important tools you'll ever have are pen and paper (or their equivalents), so you can scribble-scribble while you contemplate.

Wait (4, Insightful)

bytesex (112972) | more than 2 years ago | (#39319435)

Someone re-invented scripting languages ?

Re:Wait (4, Insightful)

gl4ss (559668) | more than 2 years ago | (#39319743)

well. yes. but it sold the idea as having someone (the computer, an AI) magically set up the situation you were thinking of when writing that code, too.

it's very easy to demo the benefits with something that, for example, just draws something, but such game engines have been done before, and it isn't really that different from just editing the code in an svg. however, as you add something dynamic to it... how is the computer supposed to know, without you instructing it? and using mock-content providers for realtime ui design is nothing new either, so wtf?

Not intended for slashdot (0)

Anonymous Coward | more than 2 years ago | (#39319465)

Interesting article this, just not for the slashdot crowd.
Why? The subject is OK, it has to do with programming (so no discussion on whether YRO or similar is interesting to the /. crowd).
The problem is the 'article' itself: it is almost shorter than the summary (and /. users like their summary shorter than the article; mikejuk did his best!).
The reason the article can be this short is that it links to a video. No problem you say? Surely in a few posts someone puts a transcription online.

The moment I am writing this, this could be first post. Not that I think it will be: it takes me too long to write.
But at least I am sure that I will be BEFORE any one with an important comment on the article, AS IT TAKES 1 HOUR TO SEE THE VIDEO.
And yes, the article says that it is somewhat slow to begin with, so the interesting bit is probably somewhere beyond the first 20 minutes or so.

Why this rant? The article looks interesting, and I was planning to read it. Until I saw the 1-hour remark: surely slashdot editors do not expect us 'No-I-did-not-read-the-article-and-even-skipped-part-of-the-two-paragraph-summary' regulars to actually view the video? It would be hilarious if after 10 minutes the video suddenly turned into something else, only no one on /. would actually get this point.

Re:Not intended for slashdot (2)

plankrwf (929870) | more than 2 years ago | (#39319483)

Just replying to myself, did not plan for the post above (yes, a first post!) to be made anonymous...

Re:Not intended for slashdot (2, Insightful)

Barbara, not Barbie (721478) | more than 2 years ago | (#39319991)

It was in the firehose a few days ago and I down-modded it because (1) the idea was stupid, (2) the article did not give anything but the vaguest hand-waving of explanations, and (3) there is no way someone is going to sit through an hour video if the blogger can't even bother to provide a half-decent summary - which is why I labeled it as binspam ... it's just link bait.

Fine. (2)

Kickasso (210195) | more than 2 years ago | (#39319469)

Now go program me a nuclear reactor control system with this.

Re:Fine. (0)

Anonymous Coward | more than 2 years ago | (#39319503)

That might be one area where it is inappropriate to use this type of tool. Seems like a pretty small slice of the overall software development pie, though, so I'm not sure what your point is.

Re:Fine. (2)

Kickasso (210195) | more than 2 years ago | (#39319599)

So nuclear reactors are out. Can you program a firewall this way? What about an airline ticket reservation system? A compiler, maybe? A web browser?

His design is very good for instant unit testing, which is fine and dandy and I'm all for it.

Re:Fine. (2)

plover (150551) | more than 2 years ago | (#39319949)

Are you saying that trial and error isn't appropriate for a system that cannot fail even one time?

I think it'd be very appropriate to build reactor control software with tests. Lots of tests. Lots and lots of tests. And you can simulate every device out there, you can simulate what happens when pressure builds or releases unexpectedly, you can simulate what happens when the operator pours his pepsi down the control panel and provides you with non-sensible inputs, etc.

Matter of fact, I can't see any other way to build safety critical software. Not just testing the hell out of it, but designing it to be testable in the first place.

is it? (0)

Anonymous Coward | more than 2 years ago | (#39319481)

A debugger in the cloud? To help us solve NP-hard problems?

An observation... (1, Insightful)

Antony T Curtis (89990) | more than 2 years ago | (#39319491)

If you need to "run" code, either in your head or on a computer, in order to see what it's going to do, you're probably not really programming and you're definitely not an engineer.

Re:An observation... (1)

Anonymous Coward | more than 2 years ago | (#39319535)

Guess that we had better go fire all of those damn physicists for wasting their time on their "models".

Re:An observation... (0)

Alex Belits (437) | more than 2 years ago | (#39320035)

Science does not work the way you think it does.
Also scientific research is the opposite of engineering -- research produces knowledge out of interaction with reality, engineering uses knowledge to produce interaction with reality.

Now go, punch yourself in the face.

Re:An observation... (3, Insightful)

vlm (69642) | more than 2 years ago | (#39319589)

If you need to "run" code, either in your head or on a computer, in order to see what it's going to do, you're probably not really programming and you're definitely not an engineer.

Would be a better post if you explained the "right way"; hopefully it's not mysticism.

What's wrong with processing this line of perl in your head, according to the rules, to figure out what it does? (Admittedly I have no idea why the heck you'd want to do this, but it's the simplest example I can think of using about 3 key perl concepts...)

s/(.*):(.*)/$2:$1/;

The other aspect has to do with new code vs maintenance (even maintenance of my own code). If I have no idea what I'm doing with my own freshly written code, that's just wrong... but old code always has some element of intense CSI work to figure out what it does before I can modify it.

Re:An observation... (2)

bbn (172659) | more than 2 years ago | (#39319765)

Looking at your expression I do not think I "ran" it in my head. Rather I "understood" it the same way I look at an equation.

Generally I do not find myself simulating the running of code when I look at a program. I only do that when the code is overly complicated and hard to understand. Or if it is a clever algorithm that I do not already understand.

This is even more true for functional languages. Looking at some Haskell program it is not even always clear how the computer is going to "run" it. You look at it as a set of equations.

Re:An observation... (5, Funny)

Anonymous Coward | more than 2 years ago | (#39319805)

I'd guess it slips a $1 and $2 bill into a stripper's titties by the looks of it.

Re:An observation... (2)

vlm (69642) | more than 2 years ago | (#39320047)

I'd guess it slips a $1 and $2 bill ...

Oh so close. She dances over to you with somebody else's one dollar bill in the left cup and a two dollar bill in the right cup, and this magically swaps the $1 into the right cup and the $2 into the left cup without using a third cup, or even a hand. So it is essentially the popular internet meme "1 Girl 2 Cups". Even worse, a sharp eye can see the process involves a colon in the regex... This is going downhill fast...

Three (obvious?) perl concepts and the pitfalls surrounding them are:
1) You're operating on the default variable $_. If a single variable chugs thru a dozen processing steps you don't have to name it each time. This is a complete statement, not just the right side of an equation.
2) s/before/after/ is the simple search-and-replace operator. Guess how many times it S+Rs? There are options to control this other than the default, and noobs always assume the default for s is what they want, and it never turns out that way...
3) If your regex matches something in parentheses, later on you can access whatever it found using the positional variables $1, $2, you get the idea. This is why perl is not amused at the idea of user variable names beginning with digits. Why does perl count regex matches from $1 instead of $0 like you'd expect? Well, $0 means something completely different, that's for sure...

Re:An observation... (1)

KZigurs (638781) | more than 2 years ago | (#39320059)

okay, never played with perl (I know, I know) - it takes a line of input and swaps the two blocks around the first ':' character it finds?

Re:An observation... (1)

Alex Belits (437) | more than 2 years ago | (#39320075)

Your expression does not work the way you think it does if there is more than one colon in the input... ...bitch!
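
In Python terms (the greedy semantics are the same as Perl's here), the first (.*) swallows every colon it can, so the swap happens around the LAST colon:

    import re

    print(re.sub(r'(.*):(.*)', r'\2:\1', 'user:group:other'))
    # -> other:user:group   (swapped around the last colon)
    print(re.sub(r'(.*?):(.*)', r'\2:\1', 'user:group:other'))
    # -> group:other:user   (non-greedy first group: first colon instead)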

Re:An observation... (1)

Anonymous Coward | more than 2 years ago | (#39319613)

And if you don't run it thru the debugger and STEP thru it, you are just guessing what it will do.

Many times I step thru my code to find some assumption I was making that is invalid.

You can write code that compiles with 0 warnings on the highest levels, can get thru the most stringent of lint checks, passed dozens of code reviews, pair-wise coded, etc, etc, etc. But until you run it and step thru and see, you will never know.

Decent engineers know it is going to work. But the good ones test it too. They know people are screwups. Also, you have built something; do you know for a fact you are making the same assumptions as the other guy who wrote the function you are depending on?

Best compliment I get from fellow engineers: "your code is easy to read and when it screws up it does it in a way I can tell what is wrong"

Re:An observation... (1, Insightful)

Alex Belits (437) | more than 2 years ago | (#39319959)

And if you don't run it thru the debugger and STEP thru it, you are just guessing what it will do.

If you are not right about the behavior of your code, you are not qualified to write it in the first place.

Many times I step thru my code to find some assumption I was making that is invalid.

Then go kill yourself. People like you are the reason why there are bugs everywhere.

You can write code that compiles with 0 warnings on the highest levels, can get thru the most stringent of lint checks, passed dozens of code reviews, pair-wise coded, etc, etc, etc.

Compiler warnings are about things you are supposed to know -- a good programmer only gets them on typos or after removing things thus leaving something unused in the code.

But until you run it and step thru and see, you will never know.

LISTEN, EVERYONE!

This is what is wrong with those people. They think they can write random shit, single-step through it, do more random changes, and repeat until it seems to run. Their code only works by accident. Get them out of programming.

Re:An observation... (1)

artor3 (1344997) | more than 2 years ago | (#39319659)

WTF? How do you see what's going on in the code, if not by running it on a computer or in your head?

Let me guess, if you need to add numbers with a calculator or in your head, then you don't really understand arithmetic?

Re:An observation... (1)

gl4ss (559668) | more than 2 years ago | (#39319763)

he means that you should code it in your head/paper/screen in some meta-language first ("design") - then you're a real computer scientist engineer programmer, otherwise you're just a stinking hacker! waterfall or death!

Re:An observation... (0)

Anonymous Coward | more than 2 years ago | (#39319889)

Sorry bro, but no matter how much designing you do, you still have to run through low-level algorithms in your head/on paper/on the computer during implementation time. There is no way around it. Being an engineer means you have good spatial abilities, which makes this all the easier.

On a side note, my boss always yells at me and says, "If you spent half the time you spend designing and talking about code on actual coding, we'd get a lot of work done around here." My boss is obviously a fucking noob.

Re:An observation... (0)

Anonymous Coward | more than 2 years ago | (#39320105)

The latest trend is called S.C.R.E.A.M. programming. It's very cool and only for high priests of software development.

Re:An observation... (1)

DaveGod (703167) | more than 2 years ago | (#39319779)

I'd guess probably half of people doing a task involving significant technical skills don't really know what they're doing.

Not to say that so many people aren't capable. It's more that for a significant chunk, once they reach that point they're not so far off promotion. Then there's another chunk who specialised in something else entirely and then found they need to do some programming or whatever as a tool for that.

It's all about efficient resource management (2)

TuringTest (533084) | more than 2 years ago | (#39319945)

If you need to "run" code, either in your head or on a computer, in order to see what it's going to do, you're probably not really programming and you're definitely not an engineer.

Sure, you can do that in your head. They got to the moon with an abacus and some slide rules, so why do we need computers again?

That you can make it in your head doesn't mean that you should. Human brains are incredibly slow and error-prone at recall and logic, the primary strengths of computers. On the other hand our brains are evolutionary-level good at recognition and pattern matching.

This should make the conclusions in the video obvious: letting the computer do the recall and logic inferences, and the human the thinking and pattern matching, will produce a much more efficient system, two or three orders of magnitude better than having the engineer produce all the derivations in his head.

It's a shame for the profession that so many slashdotters will laugh and dismiss the idea, and a sign of how incredibly conservative programmers are when it comes to the tools of their trade. Especially because these ideas were already implemented and tested in academia before our current batch of business tools (C, Unix and IDEs) were developed, and then abandoned only because they were more difficult to teach, even though they were more powerful to use. There, I have said it.

wat (0)

Anonymous Coward | more than 2 years ago | (#39319505)

Sounds like he (The guy in the video) spends too much time thinking about other things and not enough time actually coding.

Silver Bullet Alert (0)

Anonymous Coward | more than 2 years ago | (#39319513)

Every other month someone comes up with yet another Silver Bullet. It is so boring, so fscking boring. Don't /. editors have a little bit more clue than to post such rubbish as news?

Sounds like they have a GUI REPL (4, Insightful)

istartedi (132515) | more than 2 years ago | (#39319529)

Unless somebody wants to give a better executive summary, there's no way I'm wading through an hour of video. Do they have any idea how many hours there are of "the one video you must see this year" on YouTube?

Re:Sounds like they have a GUI REPL (0)

Anonymous Coward | more than 2 years ago | (#39319593)

There's a long thread about this over at SA, in their SH/SC forum. The discussion there is arguably more interesting than the video itself. :)

not really useful (0)

Anonymous Coward | more than 2 years ago | (#39319711)

It's enough to watch the first few minutes. IMO this approach is completely invalid. It is nice to see changes to the code immediately on the screen, but this applies only to GUIs and only to variables which can have arbitrary values. In real life you cannot iterate over an array and skip some element, transfer money from a user with a random uid, call a nonexistent function, etc. Also, executing invalid (not-yet-completed) code can destroy your data easily and waste your time.

The second part starts at 18:00 and is more interesting, but it won't work in real programs for two reasons:
1) function arguments are usually complex objects, and defining one (for testing purposes) is often much more complicated than just writing the method
2) you still need unit tests, which do pretty much the same thing
But all in all a debugger which shows variable values for several lines of code could be interesting.
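
Something in that direction can be hacked up today with sys.settrace. A crude Python sketch that prints the locals next to every executed line (a real tool would render this inline in the editor instead):

    import sys

    def show_values(frame, event, arg):
        """Print each executed line number with the local values at that point."""
        if event == 'line':
            print(f"line {frame.f_lineno}: {frame.f_locals}")
        return show_values

    def demo(n):
        total = 0
        for i in range(n):
            total += i
        return total

    sys.settrace(show_values)
    demo(3)
    sys.settrace(None)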

Been There Done That. (4, Informative)

Anonymous Coward | more than 2 years ago | (#39319541)

I've been doing this for years with Python's interactive prompt. I write code and test it on a line-by-line basis as I'm programming, when working with unfamiliar libraries. The code that works is then put into a permanent file for reuse as a script, or compiled to an executable for distribution to my end users.
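
For example (a made-up session; the line that behaves goes straight into the script):

    >>> import textwrap
    >>> textwrap.shorten("The quick brown fox jumps over the lazy dog", 25)
    'The quick brown fox [...]'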

The one video you must see this year (0)

Anonymous Coward | more than 2 years ago | (#39319547)

Do they have any idea how many hours there are of "the one video you must see this year" on YouTube? I wager at least 2 years' worth.

It sounds like they have a REPL, possibly with a GUI standing in for readline. Unless somebody has an executive summary that refutes that, I'm not wasting my time.

Reverse applies (1)

Anonymous Coward | more than 2 years ago | (#39319585)

... and I started to feel like a programmer when I stopped needing to check the effects of my work every five minutes.

What a hack (-1)

Anonymous Coward | more than 2 years ago | (#39319609)

That guy totally lacks abstract thought and is not suited to be a programmer in the first place. He should have taken art classes and learned to use a real paint brush and a real canvas instead. That way he'd have the better control and direct feedback he so desperately craves.

Obligatory Dijkstra (4, Insightful)

1729 (581437) | more than 2 years ago | (#39319621)

"I remember how, with the advent of terminals, interactive debugging was supposed to solve all our programming problems, and how, with the advent of colour screens, "algorithm animation" was supposed to do the same. And what did we get? Commercial software with a disclaimer that explicitly states that you are a fool if you rely on what you just bought."

From http://www.cs.utexas.edu/~vl/notes/dijkstra.html [utexas.edu].

Re:Obligatory Dijkstra (2)

Barbara, not Barbie (721478) | more than 2 years ago | (#39320041)

I'd argue that colour screens did give us a big, obvious, and immediate improvement - syntax highlighting. No having to learn some new technique or method - just open your existing code and the editor highlights it accordingly. Off-hand, I can't think of anything else that, by itself, had as much of an impact across all programming languages.

Re:Obligatory Dijkstra (1)

1729 (581437) | more than 2 years ago | (#39320093)

Yeah, I like syntax highlighting (and color screens in general), as well as interactive debuggers. Certainly, we can write code faster and find bugs more efficiently with all the tools available today. But it's the "silver bullet" claims that I'm skeptical about.

My boss sent me this drivel as well (4, Funny)

AdrianKemp (1988748) | more than 2 years ago | (#39319623)

It is the most worthless, dumbass thing I've ever had to sit through.

It's just another "wouldn't it be great if computers programmed themselves" video by someone too stupid to tie their own shoes.

I know what the code I'm writing is going to do *before* I start writing it, as I hope for the love of god most programmers do.

In fact, the biggest plague on programming ever has been this concept that changing what you want fifteen times along the way is perfectly okay and we just need to adapt methods to that idiocy. You don't need any of this crap if you just know what you want ahead of time.

Re:My boss sent me this drivel as well (1)

Anonymous Coward | more than 2 years ago | (#39319745)

I know what the code I'm writing is going to do *before* I start writing it, as I hope for the love of god most programmers do.

Some of us occasionally write programs with bugs. It's good to hear you don't.

Re:My boss sent me this drivel as well (1)

KreAture (105311) | more than 2 years ago | (#39319811)

Agreed!
When you write code you should be 100% sure what the code will do. If not, you don't really know how to program, and can be categorized as belonging to the "poke at it until it works" crowd and banned from commercial programming altogether. (Members of that crowd should stay away from kernels and other important open source projects too, please.)
Working with such coders will be frustrating at best, and the death of projects at worst.

Re:My boss sent me this drivel as well (1)

bertok (226922) | more than 2 years ago | (#39319833)

Maybe an analogy would actually be better...

Think of programming as a Mathematician developing a new maths proof. The Mathematician may not know how to get to his goal, but that doesn't mean that the solution isn't robust, or that he needs a calculator at every step.

Similarly, a good programmer can develop robust and easy-to-maintain code even without an a-priori design, or automated assistance.

Where machine-assistance comes in is that I can see situations where a computer can assist the Mathematician in a way that a calculator can't -- through things like formal proof verifying software, or software like Mathematica that can be used to perform difficult and error-prone symbol manipulation steps like simplification, factorisation, integration, etc... Also, it's possible to use computers to perform brute-force numeric verification, and even reverse-engineering, which can give useful hints. There's software methods for taking a numeric result, and "guessing" what symbolic expression could have produced it, which can be very useful for someone who's become stuck and just needs a hint of the form of an expression.

Similar methods have been applied by programmers for years. Think syntax highlighting, live error checking, or static code analysis. With sufficient computing power, these methods could be extended further. For example, wouldn't it be great if code could be given random inputs (or recorded inputs), run a bunch of times, and then each section of the code highlighted based on hit-rate or time taken? A profiler can do that for you now offline, but I'd say we have the CPU performance to do this live with small blocks of code.
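
The hit-rate half of that is a few lines of Python today. A rough sketch (an editor would map these counts to colours rather than print them):

    import random
    import sys
    from collections import Counter

    hits = Counter()

    def count_lines(frame, event, arg):
        if event == 'line':
            hits[frame.f_lineno] += 1   # accumulate per-line execution counts
        return count_lines

    def classify(x):
        if x < 0:
            return 'negative'
        return 'non-negative'

    sys.settrace(count_lines)
    for _ in range(1000):
        classify(random.randint(-10, 100))   # random (or recorded) inputs
    sys.settrace(None)

    for lineno, n in sorted(hits.items()):
        print(f"line {lineno}: {n} hits")    # the raw data for a heat map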

Re:My boss sent me this drivel as well (1)

AdrianKemp (1988748) | more than 2 years ago | (#39319923)

You are wrong, and I hope in time you see this.

Programming is *not* like a mathematician stumbling upon a new proof. Programming is in fact extremely rigid and built upon a body of existing mathematics.

There is no room in *professional* software for stabbing in the dark. There is all the room in the world for that in hobby programming done by professionals or otherwise.

I don't expect a carpenter to build a house without a blueprint; it'd turn out as shitty as most modern software.

Re:My boss sent me this drivel as well (0)

Anonymous Coward | more than 2 years ago | (#39320023)

You don't need any of this crap if you just know what you want ahead of time.

and I am sorry, boy, things don't actually work this way. Not in the real world, at least. I guess your boss already told you this...

Re:My boss sent me this drivel as well (1)

Twinbee (767046) | more than 2 years ago | (#39320065)

Wow, to be able to get code right the first time without ever debugging, you must be superhuman. Perhaps we should throw away the debugger altogether, and any debugging output. While we're at it, throw away any regression or unit testing; they're only for morons.

This interactive language already exists. (4, Interesting)

Anonymous Coward | more than 2 years ago | (#39319645)

It's called Python. I use the interactive interpreter to test snippets of code all the time. This makes me 4 times more productive than in Java, for example.

VB Redux? (1)

kimvette (919543) | more than 2 years ago | (#39319755)

The much-hated VB offers much of what you're looking for. Like it or hate it, it is really good for prototyping and for rapid development. The only drawbacks are that it encourages bad code and marries you to Windows.

This concept has already been in use for some time (2)

danomatika (1977210) | more than 2 years ago | (#39319813)

... in the creative coding community.

Patchers like Max and Pure Data allow for realtime graphical programming, and live-coding environments such as Fluxus exist for realtime graphics and sound. Max [cycling74.com] was originally written by Miller Puckette in the mid 80s for realtime control of DSP for computer music at IRCAM, and Pure Data [wikipedia.org], started in the mid 90s, is his open source take on addressing some of its design issues. Fluxus [pawfal.org] originates from around 2000 and is a live 3D engine for performance, using a Lisp variant as the on-screen scripting language.

Yet another case of artists/scientists providing a working solution to a particular problem not apparent to other disciplines. Too bad Bret doesn't cite these examples. Perhaps he doesn't know of them?

Bret's argument that realtime feedback is important to creative flow is spot on. I don't think he's calling for the use of this approach as a panacea. Naturally it won't work in all cases, but anything that helps with problem solving is welcome in my book. It's not a replacement for deep understanding, but it really allows you more creative freedom which, as other posters have noted, is useful in creative graphics/sound programming.

But we already have it! (2)

Cyberax (705495) | more than 2 years ago | (#39319849)

We already have something like it: IDEs for typed languages with strong inspections.

When I'm writing something my IDE helps me by highlighting possible errors or undesirable effects. That's not the 'visualization' stuff the author's talking about in the article, but it's actually useful.

it is about visual, rapid prototyping (1)

Anonymous Coward | more than 2 years ago | (#39319861)

The conjecture that the tree you are watching is the result of the code is pointing in the wrong direction: the tree is a representation of the code.
In this way you may start to foresee how this could be related to programming non-UI stuff.
Representation of SW is already a candidate silver bullet in The Mythical Man-Month (Brooks; stuff to read). The problem with SW representation is that as long as SW has multiple concerns, its representation has to have multiple dimensions. So we invented multiple dimensions for software representation, using different charts to represent different dimensions (think of UML). Guess what? Multiple dimensions are mind-consuming and counter-intuitive; you may describe them as very viscous notations (Cognitive Dimensions of Notations, Green 1989). So we are fucked up again. But if you can find a complete-enough single-dimension representation of your SW domain, then you are able to enter the golden kingdom of visual rapid prototyping.
Usually you can only afford to design for requirements that can be proven wrong or correct by a disciple of Dijkstra; you are not able to design for something like beauty. You cannot design in the engineering sense to cope with ambiguous or vague requirements, BUT you can prototype for it. And if you can rapid-prototype, that's not bad at all.

understanding by doing... faster (1)

lkcl (517947) | more than 2 years ago | (#39319925)

the extension of this idea is to use evolution-style techniques. from an automated perspective, that means deploying genetic algorithms to improve the code. unfortunately that means that good tests have to be written in order to verify that the code so (eventually) generated is actually "correct". many people would fail to go to the trouble of writing good enough tests.

yes - the alternative is seen to be to "read every line of code and make it run in your head".

i can't handle that. a) i can't be bothered b) i have some strangeness going on in my tiny brain that makes line-by-line logic rather hard to do. so there's an intermediary style - one in between the line-by-line and the genetic algorithms approach - that i've used for programming, successfully, for 20 years: accelerated development cycles.

in essence it's incredibly simple: pack in as many compiles into a day as you can possibly manage. optimise the development environment - and the code structure - such that this is possible. install ccache, install distcc, get a compile-farm, don't do complete rebuilds but make sure that the Makefiles are properly structured to detect changes to source files etc. etc.

by increasing the number of times that the program is run and tested, you automatically increase productivity, just as TFA says.

HOWEVER, there is a CAVEAT.

as i found out when the samba team dismissed the work that i had spent three years developing (and proving to my satisfaction by having run tens of thousands of tests over that three year period): if you follow the above development technique it is almost IMPOSSIBLE to actually explain how a particular solution was derived.

the reason is because you literally don't know. you know that it works, because across the entire set of parameters under which the code is utilised, you KNOW that it works. but the actual bugfixes? when presented with your *own* code and asked "why did you do it like this??" you can't exactly answer "well i tried 50 iterations of modifying the code and this was the one that actually worked" when you're expected to answer "the technical answer is that this algorithm is an implementation of a well-known algorithm that is described on page N of K&R's recipes on c programming: let me take you through a line-by-line code review and walk through it with you".

the key here is that the person who is asking you to justify the coding decisions is expecting you to be ABLE to do a line-by-line code walk-through, but because you've never done one of those in your LIFE let alone on the code that you wrote only a few weeks ago...

but the real problem comes when you've developed 5 to 10x more code than anyone else in the group, through this type of technique. not only does it show up your peers as potentially seeming to be incompetent, but it also means potentially that you wrote code that is very very difficult for the average programmer to understand.

that makes for a massive maintenance headache.

there was a survey done over 10 years ago, of productivity vs salary across several professions. in most professions, the ratio of salary to productivity for a range of employees varied by a factor of about 1.5. e.g. one person paid $10k could have a productivity metric of say "2.0" but their colleague paid $20k could have a productivity metric of say "6": (10/2.0) / (20/6.0) = 1.5

but for programmers, the variation came out at a whopping 10 to one discrepancy. in other words, someone who was paid only double the salary of one of their peers could be TWENTY times more productive than their lower-paid colleague.

it would be very very interesting to redo this survey, specifically for programmers, but this time taking into account the *techniques* that they use to develop code.

the plural of anecdote is performance bonus (1)

epine (68316) | more than 2 years ago | (#39320015)

I can't even bring myself to watch this, and I'm generally a compulsive bring-myselfer.

Dijkstra spins in his grave. Somewhere out there, Lamport is guzzling Guinness by the barrel and swinging his underwear around his head, while Knuth plays organ dirges.

The plural of anecdote is performance bonus. That was the VB business model in the late 1990s. This won't work twice. To obtain twice as many programmers at half the price there's now India and China. And they can do math.

The biggest Mistake Today (4, Insightful)

prefec2 (875483) | more than 2 years ago | (#39320021)

Most programmers think that coding takes up the largest part of development. In some cases they would admit that testing is also very time-consuming. But the real issue is understanding the problem. A lot of programmers design while they code. This results in strange development cycles, which also include trying to understand what the program does; that can be done with a debugger, an interpreter, or a trace analysis tool. The real issue is the problem description. First, understand the problem and its borderline cases. Second, come up with a solution for that problem. And third, try to implement it.

BTW: Most programs today do not contain that many surprising new algorithms. They collect data, they validate data against some constraints, they store data, they provide an interface to query the data. In desktop applications like Inkscape or Word the data is stored in in-memory models, and there exist serializers and renderers for the content. So the code as such is not hard to understand, as we all know such structures.

Re:The biggest Mistake Today (2)

TuringTest (533084) | more than 2 years ago | (#39320089)

And why would it be bad practice to use a computer to analyze the available data, discover the possible cases that must be covered, and thus understand what the program needs to do? Sure, the final released code should not contain the tests implemented for the first assumptions about the data. But understanding the problem is easier if you can run a primary version of the basic program logic on live data.

TL;DW (1)

Gordonjcp (186804) | more than 2 years ago | (#39320025)

It's an hour long, and I don't have the patience to sit through someone gibbering on for that long. Is there a transcript?

Gibberish. And you can quote Riker on that. (1)

HeavyDDuty (2506392) | more than 2 years ago | (#39320031)

Terrible ideas that would transform coding time to 98% on the oooh-aaah slider and set-up for endless runtime tests vs 2% actual coding. This is worse than time wasted dealing with various Android incompatibilities! Inspirational software-engineer gibberish! His approach: a different way is a better way. Sell it. Sell it. Uhm, ok. Let's see some real-world examples of software projects using more than JavaScript joke code. "It's the specs that Kosinski sent us. In my opinion, sir, they're gibberish."

Reasonable idea, terrible article (3, Informative)

Animats (122034) | more than 2 years ago | (#39320099)

The article completely misses the point. The talk starts out awful, but after about five minutes of blithering, he gets to the point. He's set up an environment where he's running a little Javascript program that draws a picture of a tree with flowers, with mountains in the background. As he edits the code, the picture changes. There's a nice user interface which allows selecting a number in the code and then changing it with a slider. This immediately affects the picture. There's an autocomplete feature which, when code typing has reached a limited number of possible choices, offers options like drawCircle, drawRect, etc. Mousing over the selections changes the picture.

It makes sense if you're drawing pictures with programs. Something like this should be in editors for Renderman shaders and Maya programs, and maybe for some game engines. It also makes sense for HTML editors, and there are HTML editors which do that. Beyond that, it may not be too useful.
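
That slider trick is easy to approximate outside his environment, too. A minimal tkinter sketch (mine, not Victor's tool) where dragging the slider redraws the picture immediately:

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=300)
    canvas.pack()

    def redraw(value):
        """Re-render the picture every time the 'number in the code' moves."""
        r = int(value)
        canvas.delete('all')
        canvas.create_oval(150 - r, 150 - r, 150 + r, 150 + r, fill='green')

    scale = tk.Scale(root, from_=10, to=140, orient='horizontal', command=redraw)
    scale.pack()
    scale.set(60)   # triggers the first redraw
    root.mainloop()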
