

No More Coding From Scratch?

Cliff posted more than 8 years ago | from the digging-through-the-automation dept.


Susan Elliott Sim asks: "In the science fiction novel, 'A Deepness in the Sky,' Vernor Vinge described a future where software is created by 'programmer archaeologists' who search archives for existing pieces of code, contextualize them, and combine them into new applications. So much complexity and automation has been built into code that it is simply infeasible to build from scratch. While this seems like the ultimate code reuse fantasy (or nightmare), we think it's starting to happen with the availability of Open Source software. We have observed a number of projects where software development is driven by the identification, selection, and combination of working software systems. More often than not, these constituent parts are Open Source software systems and typically not designed to be used as components. These parts are then made to interoperate through wrappers and glue code. We think this trend is a harbinger of things to come. What do you think? How prevalent is this approach to new software development? What do software developers think about their projects being used in such a way?"
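The "wrappers and glue code" pattern the submitter describes can be sketched in a few lines of Python. Everything here, the two "components" and their mismatched interfaces, is invented purely for illustration:

```python
# Two hypothetical "components" that were never designed to work together:
# one returns positional tuples, the other expects dicts.

def legacy_geocoder(address):
    """Pretend component A: returns a (lat, lon) tuple."""
    return (47.6, -122.3)

class MappingService:
    """Pretend component B: expects points as {'lat': ..., 'lon': ...} dicts."""
    def plot(self, point):
        return f"plotted ({point['lat']}, {point['lon']})"

# The "glue": a thin adapter that reconciles the two interfaces
# without modifying either component.
def geocode_and_plot(address, geocoder=legacy_geocoder, mapper=None):
    mapper = mapper or MappingService()
    lat, lon = geocoder(address)                   # component A's native output
    return mapper.plot({"lat": lat, "lon": lon})   # reshaped for component B

print(geocode_and_plot("1600 Example Ave"))
```

Neither component had to be touched; all the integration knowledge lives in the adapter, which is exactly where the maintenance burden of this style of development ends up.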


Windows bugs? (0)

Anonymous Coward | more than 8 years ago | (#16719645)

The immortal blue screen of death, following Windows code down through the ages.

Are we in hell?

Already been done (0)

Anonymous Coward | more than 8 years ago | (#16719649)

It's the UNIX philosophy - take tools, glue them together with a little shell scripting, and viola (sic)
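A minimal sketch of that philosophy, with Python standing in for the shell as the glue (this assumes the standard `sort` and `uniq` tools are on the PATH):

```python
import subprocess

# The UNIX way: reuse existing tools rather than reimplementing them.
# Here Python plays the role of the shell, piping `sort` into `uniq`.
words = "peach\napple\npeach\nbanana\n"

sort = subprocess.Popen(["sort"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
uniq = subprocess.Popen(["uniq"], stdin=sort.stdout,
                        stdout=subprocess.PIPE, text=True)
sort.stdin.write(words)
sort.stdin.close()
sort.stdout.close()           # let `sort` receive SIGPIPE if `uniq` exits early
output = uniq.communicate()[0]
print(output)                 # de-duplicated, sorted lines
```

The program is just plumbing; all the actual work is done by tools written decades ago.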

Re:Already been done (0)

Anonymous Coward | more than 8 years ago | (#16719751)

Viola? [wikipedia.org]

Re:Already been done (0)

Anonymous Coward | more than 8 years ago | (#16719889)

(sic)? [wikipedia.org]

Re:Already been done (0)

Anonymous Coward | more than 8 years ago | (#16720035)

Healthy. [wikipedia.org]

not everything (1)

crankshot999 (975406) | more than 8 years ago | (#16719653)

Microsoft will never go open source; the only code they reused in Vista was the bugs!

Re:not everything (0, Troll)

Orion Blastar (457579) | more than 8 years ago | (#16720311)

Yeah, Windows XP bugs make up 90% of the source code. The other 10% was stolen^H^H^H^H^H^H borrowed from DEC VMS and *BSD Unix source code.

Good Idea (3, Funny)

fatman22 (574039) | more than 8 years ago | (#16719677)

Code recycling, and that word was chosen specifically for this case, is a time-honored and proven concept. One just has to look at Microsoft's products to see that.

Re:Good Idea (0)

Anonymous Coward | more than 8 years ago | (#16720037)

Code recycling, and that word was chosen specifically for this case, is a time-honored and proven concept. One just has to look at Microsoft's products to see that.
Thanks for the good laugh^_^

Oh my (2, Insightful)

Lord Duran (834815) | more than 8 years ago | (#16719681)

If you think debugging is a pain NOW...

We could call these parts... (1)

Threni (635302) | more than 8 years ago | (#16719685)

..."Objects"! Perhaps have a framework to enable the interconnection of these objects, perhaps entitled the Common Object Model. I think you're on to something! Quick - base an operating system around it! It would probably rapidly become the world's most popular one!

Re:We could call these parts... (1)

Chaoticmass (213593) | more than 8 years ago | (#16719903)

No, we should call it component object model instead!

Re:We could call these parts... (1)

Threni (635302) | more than 8 years ago | (#16720089)

> No, we should call it component object model instead!

That's a great idea! (I should take my .NET hat off when talking about old Microsoft technology...)

Re:We could call these parts... (1)

AndrewNeo (979708) | more than 8 years ago | (#16720327)

Why bother? After you go through the trouble of making up this operating system, you'll get the idea that you should make an entirely new system where the parts are called "assemblies" and they're linked together with references.. Maybe you could name the system after a TLD again, too. .ORG or .GOV or something.

That'll be the day (1)

Curien (267780) | more than 8 years ago | (#16719699)

I predict that this will happen shortly after all composers simply re-arrange themes and phrases from previous musical pieces in order to create any new compositions and authors simply splice paragraphs from existing books to form new ones.

Re:That'll be the day (1)

CRCulver (715279) | more than 8 years ago | (#16719867)

I predict that this will happen shortly after all composers simply re-arrange themes and phrases from previous musical pieces in order to create any new compositions...

Already been done at least as far back as 1968. The third movement of Berio's Sinfonia [amazon.com] is just a cut-up version of a movement from a Mahler symphony with quotations from fourteen other composers interspersed. Schnittke's Concerti Grossi are based heavily on Bach, with most of the original content being only the arrangement. Heck, even Saariaho has written new works with the same musical content as a previous one (Pres, Petals, Nymphea Reflection).

and authors simply splice paragraphs from existing books to form new ones.

This is already done with romance novels. There is software that can write them for you.

Re:That'll be the day (1)

Korin43 (881732) | more than 8 years ago | (#16720027)

So you mean that this will be sufficient for old programs that no one cares about and porn?

This saddens me (1)

Curien (267780) | more than 8 years ago | (#16720263)

It's truly disturbing to me how someone who put so much effort into writing a response could have completely failed to understand simple words like "all" and "any".

Re:This saddens me (1)

CRCulver (715279) | more than 8 years ago | (#16720361)

No, I understood him just fine, I just clicked "Submit" too quickly. What I wanted to write further is that, in spite of the insistence of some Slashbots that coding is an art form, people who code for aesthetic reasons are a minority. Most of the time people just want to get their work done and cash their paycheck. Therefore, it's silly to compare an aesthetic field, such as music, with a functional field, such as programming, even if sometimes there are exceptions. I think it was worth pointing out that many composers have reused material to save time and fulfill a commission, just like coders rely on libraries.

Re:That'll be the day (2, Funny)

B3ryllium (571199) | more than 8 years ago | (#16720013)

It's called "Pop Music". Or, more specifically, "Nickelback". :)

Isn't that the point? (1)

Tony (765) | more than 8 years ago | (#16719703)

More often than not, these constituent parts are Open Source software systems and typically not designed to be used as components. These parts are then made to interoperate through wrappers and glue code.

Isn't that the point of Free software? That it is flexible, that we have the source code so we *can* make it do just what we want, rather than be limited by the original authors?

What's the point of coding from scratch if you don't have to?

Re:Isn't that the point? (1)

msobkow (48369) | more than 8 years ago | (#16719929)

Distributed systems are designed for component integration from day one.

Has everyone been sleeping?

Re:Isn't that the point? (1)

bmnic01 (999586) | more than 8 years ago | (#16720039)

Yep ... that is the point. Vinge has some interesting and entertaining ideas and "logical derivations" of ideas that he explores across a few books -- see his latest -- and he's an excellent writer. But the idea of component assembly goes back a ways.

Vernor Vinge is an idiot. (0)

Anonymous Coward | more than 8 years ago | (#16719709)

Why do geeks idolize authors who have no idea about how the real world works?

Re:Vernor Vinge is an idiot. (1)

petrus4 (213815) | more than 8 years ago | (#16719945)

Because he has some cool ideas. Sure, they might have zero practicality or association with the real world...but they're still cool. ;)

Re:Vernor Vinge is an idiot. (0)

Anonymous Coward | more than 8 years ago | (#16720293)

Also, Vernor Vinge is a computer science professor. One reason his books are popular with geeks is because he actually does borrow very heavily from the real computer science world.

Of course, some jackass will now say "computer science, pah, that's baloney, not real at all". Of course, he'd be saying it on the fucking internet, a product of early computer science (you try making an internet without routing algorithms...), so he really would be a total jackass.

Did we ever really code from scratch? (1, Insightful)

Anonymous Coward | more than 8 years ago | (#16719713)

Machine language maybe. But if you are using the for loops, arrays, stacks, conditions, or whatever of your high-level language, you are already reusing code at that point. We just keep taking the theft to higher and higher levels. All good art starts with theft.

Re:Did we ever really code from scratch? (1, Insightful)

Anonymous Coward | more than 8 years ago | (#16720065)

It's not theft when people intentionally create the programming language for you to use and give consent for you to use it.

Furthermore, it's not "All good art starts with theft"; it's "Good artists copy, great artists steal."

Re:Did we ever really code from scratch? (1)

heinousjay (683506) | more than 8 years ago | (#16720163)

Wow, you thieves sure get defensive when you're caught.

It depends on the application (2, Interesting)

technoextreme (885694) | more than 8 years ago | (#16719723)

I certainly do not want to be developing the same common hardware implementation for an FPGA over and over and over again. Give me the code for something that works and let me get to the stuff that is unique to the project.

Pay Me. (1)

KermodeBear (738243) | more than 8 years ago | (#16719731)

What do software developers think about their projects being used in such a way?
Fine with me as long as they pay the licensing fees.

Has anyone else found (1)

majutsu (1018766) | more than 8 years ago | (#16719759)

code reuse works best in languages like Lisp and not so much in the C/C++ family? I'm saying this mostly because in Lisp I can drop down to almost any section of code instantly (most of the time, not always), while in C/C++ IDEs, tracking down a bug in a large codebase can frequently be a day-long job.

In the other languages, I often feel like I'm treading through a minefield when reusing anything less than proven APIs or functions, let alone entire codebases.

Elephant graveyard of failed software (0, Flamebait)

zitintheass (1005533) | more than 8 years ago | (#16719763)

SourceForge is an elephant graveyard of failed software [sourceforge.org], and you may try your luck there, but real programmers write their own code. Only losers rip.

Turtles all the way down... (1)

AlXtreme (223728) | more than 8 years ago | (#16719775)

If a piece of open source software is used as a component often enough, people will turn it into a component (through an API, plugins, software packages, modules, etc.). This improves the overall design of the software and allows better code reuse, but more importantly reduces maintenance issues by allowing developers to upgrade components with relative ease. I see this as a "Good Thing(tm)".
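That componentization can be sketched as a thin facade: callers depend on a small interface you control, so the wrapped implementation can be upgraded without touching them. The class and method names below are invented for illustration; the implementations just borrow from Python's standard library:

```python
import hashlib
import zlib

class Checksummer:
    """Stable component interface that applications program against."""
    def digest(self, data: bytes) -> str:
        raise NotImplementedError

class ZlibChecksummer(Checksummer):
    """Today's implementation, wrapping a reused library (zlib's CRC-32)."""
    def digest(self, data: bytes) -> str:
        return format(zlib.crc32(data), "08x")

class Sha256Checksummer(Checksummer):
    """Tomorrow's drop-in upgrade; no caller changes needed."""
    def digest(self, data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

def verify(component: Checksummer, data: bytes) -> str:
    # Callers see only the Checksummer interface, never the wrapped project.
    return component.digest(data)

print(verify(ZlibChecksummer(), b"hello"))
print(verify(Sha256Checksummer(), b"hello"))
```

Swapping the implementation is a one-line change at the call site, which is the maintenance win the parent describes.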

I read it and.. (0)

Anonymous Coward | more than 8 years ago | (#16719777)

think Visual Studio!

Ridiculous. (0, Troll)

Max Threshold (540114) | more than 8 years ago | (#16719783)

This idea is being pushed by suits who think it will cut costs and increase profits. And it probably will, in the short term... as long as your customers lower their expectations instead of jumping ship. Welcome to the future of unmaintainable garbage software. (Then again, if you've been running Windows you probably won't notice the difference.)

Re:Ridiculous. (1)

innosent (618233) | more than 8 years ago | (#16720029)

No, this is common practice, because it often just makes sense. Maintaining a set of common components means that when you do reuse a piece, you can maintain it once, and release the updated component to all of the affected software. So if you have 30 different customer applications that all use the same iText [lowagie.com] based report generators, you only have to fix a bug in one place, or add a new feature (like a new type of chart) in one place, and you can release the update for all 30 applications (and satisfy your 30 maintenance contracts). I have a set of my own components and FOSS components (like iText), that I reuse for various common features, and I'm sure the big companies (CA for example) do the same.

Re:Ridiculous. (1)

rm999 (775449) | more than 8 years ago | (#16720061)

"Welcome to the future of unmaintainable garbage software."

That's a strong statement. I disagree - I think a strong component library will actually improve the maintainability of software by an incredible amount. Software engineering is the last major engineering field to realize a useful and standardized component model. Electronics would be nowhere without ICs. Mechanical engineering would be nowhere if people had to design motors and engines separately for each application.

All this increases maintainability by decreasing debugging time when something goes wrong.

Re:Ridiculous. (1)

Max Threshold (540114) | more than 8 years ago | (#16720237)

Sounds great, in theory. But in the real world, most code (other people's code... always other people's) isn't that well modularized and can't be reused without introducing lots of bugs to an application. Every hour saved writing code will cost you three in debugging and re-writing.

Re:Ridiculous. (0)

Anonymous Coward | more than 8 years ago | (#16720297)

Windows XP works very well. You should stop posting. You're worthless.

Just ain't going to happen (1)

syousef (465911) | more than 8 years ago | (#16719785)

Nice piece of science fiction, but the reality is there are orders of magnitude more things that can be done writing new code from scratch than just combining existing pieces. People paying for the software tend to make sometimes ridiculous demands on customisation of everything from look and feel to the algorithms doing the actual work, whether in transaction processing, science, or graphics. Therefore new snippets will continually need to be coded.

What we're talking about here is the reuse of libraries. It's actually good practice to find a library and use a tested piece of code rather than go out and re-write it from scratch. My experience is that most developers are bad at searching for and reusing code from their own projects, let alone from some vast archive (a.k.a. the Internet). The tools and processes in place for finding code that you can trust, and that does what you want, are so poor that at present we're far from even this level of reuse. Hell, coders keep re-writing string handling routines that exist aplenty. But where do I find test cases and results for the existing libraries that I can examine? The quality of the libraries varies widely by language and area of functionality. If I trust the libraries, I end up writing code that doesn't work, and I cop flak for it.

For example, there's nothing wrong with Java string handling, but have you ever tried to use the standard date parsing routines? Hint: even with lenient=false, it tries to guess what you meant to enter if you enter something that's close to being a real date. If you want to reject incorrect input, you've got to write it yourself. This isn't an exercise in Java bashing - every language has areas of weakness in its libraries - but it does explain why programmers often prefer to write it themselves. Even if they do decide to reuse, if they're not careful to retest the algorithms, they are taking big risks in terms of getting working functionality at the end of the day.
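The parent's complaint is about Java's date API; as a hedged illustration (in Python, not the Java API in question), here is the strict-rejection idea, including a round-trip check that catches inputs which parse but aren't in canonical form:

```python
from datetime import datetime

def parse_strict(text, fmt="%Y-%m-%d"):
    """Reject anything that isn't exactly a valid date in the given format."""
    parsed = datetime.strptime(text, fmt)   # rejects impossible dates (Feb 30, month 13)
    if parsed.strftime(fmt) != text:        # round-trip catches non-canonical input
        raise ValueError(f"not canonical: {text!r}")
    return parsed.date()

print(parse_strict("2006-11-03"))           # a real, canonical date: accepted
for bad in ("2006-02-30", "2006-1-3"):      # impossible date; unpadded fields
    try:
        parse_strict(bad)
    except ValueError:
        print("rejected:", bad)
```

The round-trip trick is the part lenient parsers make you write by hand: "2006-1-3" parses fine, but re-formatting it yields "2006-01-03", so the mismatch exposes it.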

A Fire Upon the Deep (1)

colaco (616922) | more than 8 years ago | (#16719791)

I'm pretty sure that the Vernor Vinge novel that talks about this issue is "A Fire Upon the Deep" [wikipedia.org] and not "A Deepness in the Sky". The latter is a prequel to the former. "A Fire Upon the Deep" was also one of the first science fiction novels to portray Usenet.

Re:A Fire Upon the Deep (1)

realmolo (574068) | more than 8 years ago | (#16719863)

Well, there is a lot more talk about "code re-use", and programming in general, in "A Deepness in the Sky". System software is something of a major part of the plot.

"A Fire Upon the Deep" doesn't go into nearly as much detail about such things.

Incidentally, I thought "A Fire Upon the Deep" was a much better book, partially because it doesn't obsess over the details like "Deepness" does.

Re:A Fire Upon the Deep (1)

Digi-John (692918) | more than 8 years ago | (#16719907)

"Fire" is a much better action book, in my opinion, but I believe that "Deepness" is the overall superior book *because* of the obsession over details you mention. Now, if you were going to suggest which book someone should read first, "Fire" is definitely the one.

Re:A Fire Upon the Deep (1)

Digi-John (692918) | more than 8 years ago | (#16719893)

I'm pretty sure that it is Deepness that details Pham Nuwen's work in code archaeology. I suppose this would have something to do with Slow Zoners not having AI to handle code for them, the way the Beyonders of "A Fire Upon the Deep" would.

Re:A Fire Upon the Deep (0)

Anonymous Coward | more than 8 years ago | (#16719913)

I don't think A Fire Upon the Deep mentions program "archaeology" as such. A Deepness in the Sky goes into detail about Pham's early training as a programmer-archaeologist; I think FUtD only refers to him as a "programmer-at-arms".

Re:A Fire Upon the Deep (0)

Anonymous Coward | more than 8 years ago | (#16720381)

I'm pretty sure that the novel of Vernor Ving that talks about this issue is "A Fire Upon the Deep" and not the "Deepness in the Sky".

You are also pretty wrong.

Service Oriented Architecture (3, Informative)

bigtallmofo (695287) | more than 8 years ago | (#16719795)

The promise of SOA says that you won't have to do this. If you believe in that promise then anyone that develops projects in the future will create them in discrete elements that are accessible as a web service. If you want to start a new development project, just utilize those services that you need and ignore the ones you don't. Because the functionality is encapsulated (and therefore, written, debugged and tested) within the service you're good to go.

I see application projects in the future acting like glue that holds many services like this together and makes them more than the sum of their parts.

Re:Service Oriented Architecture (0)

Anonymous Coward | more than 8 years ago | (#16720289)

That's the key. Using/orchestrating web services and such is where it's at. I think we're slowly getting there. We'll need some solid versioning systems (old apps using web services that have been updated and now work differently) and such things. Hopefully SOA knowledge will get more popular (you don't find highly skilled SOA engineers everywhere and they won't work for a dime either), more books/better education for it, better software to make it all happen, etc. Web services are a true godsend for interoperability (totally platform and language independent). But overall, I think we're getting there. Windows Communication Foundation [WCF a.k.a. Indigo] is looking very, very good too.

bullshit (0)

Anonymous Coward | more than 8 years ago | (#16720331)

okay... this might work for your standard-issue make-monkey-press-button-to-do-repetitive-task business drone software, but I don't think you'll see a dynamically resized 3D model of a tank or anything as complicated hooking into a com.buzzword.WebServiceAPI over HTTP for each calculation.

we moved beyond VAX hardware speeds for a reason, and sometimes that reason is not so you can bloat your network with spurious XML-over-HTTP traffic.

That book... (1)

fm6 (162816) | more than 8 years ago | (#16719797)

...though basically well written, is full of dumb ideas. This particular dumb idea comes from the notion that the development tools we use now are pretty much as sophisticated as they're ever going to get. That's right up there with the famous urban legend about the proposal to close the patent office.

Vinge's problem is that he makes too much of the famous failures of AI, and has fallen in with the camp that believes that software will never be able to compete with wetware. That has yet to be proven, but even if it's true, simply replicating human intelligence (so-called "hard AI") is not the only strategy.

Re:That book... (0)

Anonymous Coward | more than 8 years ago | (#16720041)

This particular dumb idea comes from the notion that the development tools we use now are pretty much as sophisticated as they're ever going to get. [...] Vinge's problem is that he makes too much of the famous failures of AI, and has fallen in with the camp that believes that software will never be able to compete with wetware.

Vinge's premise is not that AI is fundamentally impossible — transhuman intelligence is certainly possible in the Transcend. He just thought it would be a cute idea if AI never pans out on Earth, because the laws of physics here prohibit it. It's science fiction, you know. He doesn't actually believe that the Earth is located in the middle of a Slow Zone in the galaxy, nor does he believe that strong AI is impossible in real life. This is the guy who wrote that essay on the technological singularity, remember?

Re:That book... (1)

IICV (652597) | more than 8 years ago | (#16720197)

Vinge's problem is that he makes too much of the famous failures of AI, and has fallen in with the camp that believes that software will never be able to compete with wetware.

Do you realize you're accusing an author of being wrong about what happens in the world he created? It's like telling Tolkien "No no no, elves are short and happy and wear pointy hats and work for Santa, they're not elegant beings who have been around since the dawn of time".

Just so you know, since I don't remember if it's actually in that novel: one of the bases of the Zones of Thought scenario is that it becomes easier to do nifty ultra-high-tech stuff the further out from the galactic core you go - and as I recall, A Deepness in the Sky takes place in a region of space where the best AI you can build is barely functional, or worse.

And also, I'm pretty sure that for Vinge, software will never "compete" with wetware. They'll just merge - look up The Cookie Monster, Fast Times at Fairmont High, True Names, and apparently also Rainbows End, though I have not read that yet.

Re:That book... (1)

fm6 (162816) | more than 8 years ago | (#16720459)

Oh please. Creating an imaginary past based on current mythology is not the same thing as speculating about a possible future based on current technology.

Components for Linux, anyone? (1)

Amid60 (938961) | more than 8 years ago | (#16719799)

This type of coding has been going on in the Windows world for many years (10?), using glue languages like Visual Basic and off-the-shelf "components". Hackers in the past seemed to deride the folks using this technology. Real code reuse will only take off under Linux after a unified component architecture is established, similar to the much-maligned but very useful ActiveX on Windows. This plumbing should belong to the OS, at the very least to enforce just one architecture (Linus seems to keep a good lid on option proliferation). Please don't point to the many component architectures already available in the open source world - the very fact of multiple choices makes components pretty much useless, as anyone who has tried to port a non-trivial program from Gnome to KDE knows well.

We're already there (2, Insightful)

Zepalesque (468881) | more than 8 years ago | (#16719809)

As the developer of freely available software, I find the prospect of people using my code a mixed bag. Partly, I feel ownership of the code I write and am somewhat offended by the idea of people using it. However, as a paid engineer, I go through this at regular intervals; older projects get handed to others for support as I work on new components.

On the other hand, I welcome the idea that my free code would be used by others - it is a flattering prospect, I suppose.

Others profit from this sort of re-use: COM, CORBA, Jars, etc...

Practicalities (1)

IanBevan (213109) | more than 8 years ago | (#16719811)

A key barrier to code reuse is that technology marches on. Many advances render older code either obsolete or, at best, cumbersome. A recent example: the addition of generics to .NET 2.0 rendered vast tracts of our .NET 1.1 internal class libraries obsolete.

Layers of Abstraction (1)

thorholiday (970488) | more than 8 years ago | (#16719839)

Isn't this why we have layers of abstraction, such as high-level programming languages and external libraries?

Sure, if someone who only knew assembler from 20 years ago saw the GUIs present today, they might agree with the article. Imagine if you had to write a Windows application in pure assembler. It would be ridiculous! But we have languages like C# or the Windows API for C/C++ to aid with these kinds of things.

As we progress through time, programmers will simply use higher and higher level languages.

Re:Layers of Abstraction (1)

Tony Hoyle (11698) | more than 8 years ago | (#16720391)

Writing a Windows application in pure assembler isn't *that* hard... it's worth doing as an exercise sometime.

Writing a complex one would probably be impractical due to time constraints rather than difficulty.

Well... (1)

kitsunewarlock (971818) | more than 8 years ago | (#16719843)

Unless we move out of 64-bit architecture, right? I mean, we'll have to by Sunday, December 4, 292,277,026,596 or else we'll have something slightly similar to Y2K (or Y2038). But really, aren't a lot of viruses and spybots programmed this way? I wouldn't know; the only thing I could do related to reprogramming a virus or spybot would be renaming the exe file that the dumb@$$ has to open to get his system infected.
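For what it's worth, that date is just signed 64-bit time_t arithmetic; a quick back-of-the-envelope check (using the mean Gregorian year as an approximation):

```python
# Roughly when does a signed 64-bit count of seconds since 1970 overflow?
max_seconds = 2**63 - 1
seconds_per_year = 365.2425 * 24 * 3600   # mean Gregorian year in seconds

rollover_year = 1970 + max_seconds / seconds_per_year
print(f"~{rollover_year:.3e}")   # on the order of 2.9e11, i.e. year ~292 billion
```

The exact calendar date depends on leap-year bookkeeping, but the magnitude matches the parent's figure.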

!News (1)

iamacat (583406) | more than 8 years ago | (#16719859)

Few people write their own C compilers, libc, math libraries, HTTP protocol handlers and so on. If other/bigger components are available, of course they will be reused.

That sounds very familiar... (1)

DM78 (1022835) | more than 8 years ago | (#16719873)

I've been working on a MediaWiki modification that details every portion of the function/object/application creation process. The idea was that I could put a function or object together using pre-existing chunks in the wiki and then the end result will automatically be fully commented and based on code that has already been tested and proven.

No more buffer overflow exploits, no more null pointer dereferences, and no more need for 30+ libraries that have duplicate or similar code in them... or so the thought goes.

So far, I have a few entries targeted for the 6502 processor, and the script that generates the end result code is mostly working... sort of. Heh. Eventually, I'll fill it out with more useful stuff and then make it public, but until then, it's just a hobby to pass the time.

bullshit (1)

timmarhy (659436) | more than 8 years ago | (#16719909)

Various "experts" have been talking this nonsense for years. It's never going to happen.

Reality (1)

octopus72 (936841) | more than 8 years ago | (#16719911)

The idea of reusing decades-old logic doesn't work. Someone has to maintain all that code: the kernel, basic libraries, platform APIs, etc.

The problem is, software gets old, obsoleted by new ideas and the next generation of hardware, so updates are needed all the time. If you don't pay attention, you can end up with cruft like the X Window System protocol, which people are now trying to fix and improve to compete with other popular OSes.

Unlike in this novel, the actual (best) code search won't need digging; code will be advertised, categorised, documented, just ready to be used.

No way Jose. (3, Insightful)

140Mandak262Jamuna (970587) | more than 8 years ago | (#16719935)

Yeah, all the European Space Agency was trying to do was reuse the Ariane 4 code in Ariane 5. And the rocket blew up 40 seconds [umn.edu] after launch. Why? Ariane 5 flies faster than Ariane 4 and hence has larger lateral velocity. The main software thought the readings were too high and marked the lateral velocity sensors as "failed". All four of them. Then, without sensors, all the computers shut down. The vehicle blew up. But by that time some bean counter had already shown millions of francs in savings, claimed credit for specifying the Ada language for flight control software, and collected his bonus.
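That failure mode is easy to reproduce in miniature: a reused conversion routine whose range assumption held for the old flight profile but not the new one. The numbers and function below are invented for illustration, not the actual flight software:

```python
def to_int16(value):
    """Reused conversion routine: assumes the reading always fits in 16 bits."""
    scaled = int(value)
    if not -32768 <= scaled <= 32767:
        raise OverflowError(f"reading {value} exceeds 16-bit range")
    return scaled

ariane4_reading = 20_000   # within the envelope the code was written for
ariane5_reading = 40_000   # faster vehicle, larger reading: hypothetical numbers

print(to_int16(ariane4_reading))       # fine: the old profile never overflows
try:
    print(to_int16(ariane5_reading))
except OverflowError as e:
    print("unhandled in flight:", e)   # the kind of error that shut the unit down
```

The reused code was correct for the environment it was written for; the bug was in reusing it without revalidating its assumptions against the new environment.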

Some basic tasks like file I/O or token processing and other minor things might be reused. But even then, porting something as simple as a string tokenizer written for a full-fledged virtual memory OS like Unix/WinXP to a fixed-memory handheld device is highly non-trivial, especially if you want to handle multi-byte i18n character streams.
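The multi-byte concern is concrete: code that reads fixed-size byte chunks can split a UTF-8 character in half. A sketch of the usual remedy, using Python's incremental decoder (the sample text and chunk size are arbitrary):

```python
import codecs

# UTF-8 text arriving in fixed-size chunks, as on a small device:
data = "naïve café".encode("utf-8")
chunks = [data[i:i + 3] for i in range(0, len(data), 3)]  # may split characters

# Decoding each chunk separately would crash on a chunk that ends
# mid-character; an incremental decoder buffers the partial bytes instead.
decoder = codecs.getincrementaldecoder("utf-8")()
text = "".join(decoder.decode(chunk) for chunk in chunks)
text += decoder.decode(b"", final=True)   # flush any buffered remainder
print(text)
```

Byte-oriented code written for a single-byte world has none of this buffering, which is one reason the port is non-trivial.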

If the author sells what he was smoking while coming up with the article, he stands to make tons of money.

Re:No way Jose. (2, Insightful)

StrawberryFrog (67065) | more than 8 years ago | (#16720339)

all European Space Agency was trying to do was to use the Ariane 4 code in Ariane 5. And the rocket blew up 40 seconds after the launch. Why? Ariane 5 flies faster than Ariane 4 .. the main software thought the readings were too high and marked lateral velocity sensors as "failed"

You claim that this rocket failure is due to software reuse. That just sounds wrong to me. I don't think that not starting from scratch is that relevant. I could more convincingly argue that the failure is due to the software not being tested with the input values that it would receive during operational use. That is important, be the code new or old; and when a failure costs millions, not doing so is inexcusable.

Resistance is futile (1)

HangingChad (677530) | more than 8 years ago | (#16719937)

Your code will be assimilated.

High(er) level languages anyone? (1)

kotku (249450) | more than 8 years ago | (#16719949)

The article is a bit of nonsense really. It ignores the fact that building software is a vertical problem, where most often you pick the highest-level tool that gets the job done. You now have:

Digital Logic
Machine Code
C Code Family
Dynamic Languages / Visual Languages
What next...

In 20 years' time nobody will be pissing around with C code or Java or Lisp (ok, maybe Lisp) except for historical/maintenance reasons. There will be new higher-level constructs leveraging streamlined, minimal lower-level constructs. Many of the problems defined today by large code bases will be rewritten using less effort and more sophisticated, expressive tools. In 20 years' time there will be 2^20 cores on a chip, perhaps. I doubt code bits from today will solve those sorts of problems.

My slashdot captcha was "pervert" huh!

Re:High(er) level languages anyone? (2, Funny)

ResidntGeek (772730) | more than 8 years ago | (#16720269)

In 20 years' time nobody will be pissing around with C code or Java or Lisp (ok, maybe Lisp) except for historical/maintenance reasons.
That's right. Why, just think of 20 years ago. Today we program in C and C++, and beginners use BASIC. Not at all like people did it 20 years ago.

code reuse (1)

zacs (12785) | more than 8 years ago | (#16719951)

This new concept fascinates me. We can take these individual blocks of code (let's call them "libraries") and join them together with some kind of programmatic glue, potentially reusing them to make some kind of application. This is definitely cutting-edge thought. I'm not sure it will catch on, though.

Congrats, you just described my job (1)

Shados (741919) | more than 8 years ago | (#16719959)

Vernor Vinge described a future where software is created by 'programmer archaeologists' who search archives for existing pieces of code, contextualize them, and combine them into new applications.
Yeah, that's the present for a lot of us (most of us, even?).

Unless you're working in R&D or making things that don't exist yet (a lot of game engines that are from scratch, a lot of open source projects, new protocol implementations, etc.), that's probably what you do 40 hours a week. I know a lot of people, for example, who have to work with web services and, for one reason or another, have to do contract-first development (because they do things their toolkit, if they even use one, doesn't handle); they often copy-paste a WSDL template and modify it. A lot of database management tools have a library of template stored procedures. Googling for code snippets and samples is always good...

Especially with environments like Java or .NET, you're almost always sure that someone else did something before you, and did it better. Copy and paste their code, modify it, integrate it. It's what I do all day (always making sure to understand how everything works, though!). That allows me to do a lot of things I have no experience in. I'm sure I'm not the only one.

Software Architect (1)

Barleymashers (643146) | more than 8 years ago | (#16719967)

I was in a training class for Rational Software Architect, where I was 'told' that some companies have set up and augmented the Rational suite so that about 85% of the code is generated before a developer has to touch it. Their claim was that the architect will be the main driver behind future development and the key cog in the enterprise.

jpg flaw/bug (1)

drfrog (145882) | more than 8 years ago | (#16720009)

Isn't code reuse the cause of that major flaw?

No one questioned the code for years.

suggested in 1986 or so (0)

Anonymous Coward | more than 8 years ago | (#16720011)

I tried to get RCA to move toward a system like this back in 1985 or 1986, but there was too much protection of each group's work to overcome; managers did not want to release "their" code to others, even for use inside the one company.

It's called "A Library"...duh! (1)

sbaker (47485) | more than 8 years ago | (#16720047)

What?!? Why is this news? We have libraries - those are the big chunks of useful code - packaged and (hopefully) documented for re-use and widely distributed.

When you design new applications, you look for libraries that do the bulk of the work, and the application becomes mostly 'glue' to hold the libraries together. Scripting languages are the epitome of this: large piles of carefully written and optimised libraries make up for the overhead of interpreting the glue code itself.

Dunno why anyone finds this surprising - it's what we've been doing almost since the dawn of programming.

The tricky part is noticing when you can't find a suitable library, so that rather than dumping a bunch of code into the application, you design it as a reusable library. There is a small overhead to doing this, but rarely more than (say) 5% of the development cost. In Open Source code this is generally done well. The hard part is in commercial code, where you have someone with a chequebook and a severe lack of foresight breathing down your neck who would rather save 5% now than save 30% on the next project.

Reminded me when I checked the NWN 2 Toolset (1)

Jugalator (259273) | more than 8 years ago | (#16720073)

The Neverwinter Nights 2 Toolset was redone in .NET, but I checked the various components, and it at least used these:
- RAD Game Tools' Bink library
- RAD Game Tools' Granny library
- RAD Game Tools' Miles Sound System
- Crownwood Software's DotNetMagic 2005
- D. Turner, R. Wilhelm, W. Lemberg -- FreeType font rasterizer
- Glacial Components Software's GlacialTreeList
- Mike Krueger's (of ICSharpCode) SharpZLib library
- Divelements Limited's SandBar
- Various libraries done by Sano
- Various libraries done by Quantum Whale
- Davide Icardi's (of DevAge.com) Source / SourceGrid libraries
- Matthew Hall's XPExplorerBar
- Zlib

These were third-party components used in the toolset application, and there could be more as well (I just checked standalone libraries, not any statically linked stuff).

It happens on BigIron too (1)

Fookin (652988) | more than 8 years ago | (#16720087)

I was speaking to someone at work last week and this exact topic came up. We have 5000+ batch jobs written in JCL and whenever someone needs to write a new program, they just mine through what's already out there and use it for the new stuff.

It's not that uncommon at my workplace ...

deadline = shortcuts (1)

PovRayMan (31900) | more than 8 years ago | (#16720099)

With companies like EA forcing insanely short deadlines to get Generic Sports Game 2007 out by the holidays, their code slaves are pretty much forced to reuse almost everything, in its horrific state. If end-product quality matters, then "from scratch" programming will have a larger priority.

Good in Theory (1)

CyberLife (63954) | more than 8 years ago | (#16720117)

The notion of complete code-reuse and never coding anything from scratch is a very good idea in theory. The problem is that in order for it to work the components from which things are built have to be reliable and trustworthy, and such things are not always easy to find.

I've been studying errors and defects in engineering (both computers and otherwise) for many years. I was raised by an accident investigator, so I have an innate understanding of why things fail and what people can do to avoid it. The core reason for anything to fail is the assumption of correctness. In all accidents and failures, one can always find somebody somewhere that made an assumption about a component's behavior. The fact they did not know its true behavior caused them to employ the component when they should not have. Thus, the other elements of the system that depended on its assumed behavior also fail, and it just cascades from there.

In software, such assumptions of behavior often take the form of (to name a few) failing to detect null pointers, failing to enforce buffer limits, or failing to enforce proper protocol operation. Opening the source to the world can help to detect such assumptions, but someone still has to check. As overkill as it may seem, I've been known to write unit tests for third-party libraries to ensure they behave as their documentation says. Many times I've found them to be faulty. Had I not tested them and just assumed their correctness, my product would not have worked. Even though my code may have been correct and the author of the libraries was at fault for their code not working right, I too would have been in error for making a false assumption.

Until such time as we can trust the components upon which we base our work, we cannot employ 100% code reuse. To reach such a stage, we need a radical shift in our approach to engineering. We need to ensure that our products do as they're supposed to and not just assume. Unfortunately, the rush to market often prevents us from doing what we should, and that's a shame. First to market is not always best to market.

Re-using Software Systems is Very Common (1)

this great guy (922511) | more than 8 years ago | (#16720137)

You may not realize it, but as of today, virtually all new applications being developed re-use existing working software systems. For example, when you develop a new web application, the components that you re-use are the OS, the HTTP server, the database server, the scripting language (PHP...), etc. Of course nobody is re-developing everything from scratch. Very often only the highest-level software layer (i.e. your application) is developed partially or mostly from scratch, while everything else (i.e. the internal or lower-level components) is re-used from existing projects.

code reuse is pretty common (1)

k1ttyk4te (1022845) | more than 8 years ago | (#16720151)

This has been a useful and common method of software development for a long time - at least as long as I've been programming professionally - and we were taught it in school, of course, though we rarely did it there, since the point was to learn how to do things ourselves before using others' code. Probably 85% of my new development work is tying components that either I wrote or someone else did into new and interesting configurations. The stuff I use (typically compiled code written by other programmers) was itself built from custom libraries of things other programmers wrote, and so on and back. There are pieces of legacy code in our custom libraries that date back at least 15 years. Code reuse makes designing and building useful software faster, more standardized, and a lot easier to understand. Sure, sometimes you want to do it yourself, and that's good, but really? The chances that your print routine or sort are going to look substantially different from anyone else's are low. Of course, none of that applies to research programming, as far as I can tell.

Standards (1)

Millenniumman (924859) | more than 8 years ago | (#16720173)

One of the problems is the necessity for glue code. There need to be standards for software libraries. You can have different libraries, but certain standard functions should always get you something that will work in context. One should be able to take any library out of any piece of software and replace it with another library (assuming there is more than one for the same purpose).

Not until more code is public domain. (1)

Mike McTernan (260224) | more than 8 years ago | (#16720191)

While it's easy to grab code and quickly put it together to make something that works, it is usually the licences, and the difficult-to-interpret legalese, that limit this approach. This becomes relevant if distribution of the resulting software is required, and is especially relevant in a commercial context. Clean-room implementations are going to be around for a while to come, IMHO.

The PHB's wet dream (1)

KillerCow (213458) | more than 8 years ago | (#16720209)

This has been the PHB's wet dream since programming began. They see writing programs as assembly. It isn't assembly; it's design. You can't automate design.

They always talk about making generic components that can just be "glued" together to create a functioning application. You can't. That "glue" is the business logic, and it's what your program does. It's what you are paying the developers to do.

If you have good developers (skilled, not monkeys), they will create (or use) libraries to do most of the heavy lifting, but they still need to put those libraries together in a way that solves a business problem. If you could just automatically glue pieces together, how would you provide a useful product for your customers? Sure, you can take an "email system" and a "social networking" system and "glue" them together, but to do what, exactly? Solve some business problem, perhaps? How will you "glue" them? By developing business logic.

It's like taking two different ideas / products, placing them in a room together, and expecting "synergy" to create something new and great. Even if you have some great idea about how those two things can be combined to create something, you still need someone to do the design work of actually combining them.

You wouldn't expect an air conditioner and a storage locker to just magically combine together to create a modern refrigerator, would you? You need someone smart to integrate them. That is what a developer does.

Move along, nothing to see here (1)

thatjavaguy (306073) | more than 8 years ago | (#16720253)

We've heard it all before. Next story please.

small language, big class library (1)

StrawberryFrog (67065) | more than 8 years ago | (#16720267)

Look at recently developed programming environments that have made it big: Java and C#, running on a virtual machine, using a large class library.

My first impression of Java was of "a small language with a big class library". I don't mean that in a bad way; the relatively small number of syntactic elements in the actual language, and the handing off of things like, say, threading to classes, is a good thing.

But coding from scratch? What's that supposed to mean - typing ones and zeros at an OS-less motherboard? Working with all this support is the present and future of coding. Working without it is becoming a niche, for the people who make the VMs, OSes, device drivers, and class libraries that the rest of us depend upon. And yes, this trend toward richer class libraries will continue. Duh.

It's about respect for the role of the programmer (1)

Dilbert48 (130913) | more than 8 years ago | (#16720271)

We're all leveraged when we can build on the good ideas and good code of others. The danger, in some organizations, is the belief that they can hire low-skill workers or non-programmers to put the pre-designed components together. As an old-timer, I've watched software development over the years, and there is a continual tendency among enterprise managers to conceptualize programmers as low-cost, commodity labor. One company reorganized its development group out of existence, conceiving of the work as something to be done by vendors and the occasional consultant. The result is that they are frequently in a state of crisis, with their systems failing and nobody on staff with the skill to diagnose or fix them. Managers need to know that they need highly skilled developers, and they need to pay them what they are worth. These people can intelligently build on the work of others in a way that works for the unique problems of their business.

great author, wrong book (0)

Anonymous Coward | more than 8 years ago | (#16720287)

You mean A Fire Upon the Deep (http://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep [wikipedia.org]). Deepness is a great book, but information complexity (and AI) is much more Fire's area.

The future is ... ten years ago. (1)

pedantic bore (740196) | more than 8 years ago | (#16720295)

I haven't written a program longer than about 2,000 lines from scratch in more than ten years. Nobody I know has, either. It's all about adding another feature, or removing another bug, from giant ecosystems of software that grow like moss over the hardware.

Makes me nostalgic for the days when I used to start with an idea and create the design from there. But even then, it was almost always the case that everything I built was built on top of, or in the context of, another software system. (well, there was this one time when I wrote my own assembler so I could write my own boot ROM for a machine you've never heard of... but that's not exactly normal)

Geesh, It's been happening for 50+ years (1)

davidwr (791652) | more than 8 years ago | (#16720321)

Ever since the first person reused existing code.

This is just another step along the same continuum.

I wrote a small perl program the other day. If you count all the code that executed when I ran it, "my" code was only an itsy-bitsy-teen-weeny part of the whole.

"Move along now, nothing for you to see here."

Unrealistic (1)

onetwofour (977057) | more than 8 years ago | (#16720365)

Whilst this is an uber-fantasy for minimizing cost and development time, it will never exist. You just have to look at the number of bugs that exist in software: they will never really vanish, and if we're not writing bespoke software they are likely to double. The text talks about architecture within the project being what will replace the hard work; however, software architects exist in a project only at the start. It's rare to bring them in at the end, and can you imagine the hell you'd go through having to debug some alien code when it's not working?

The people who pass for architects at my university are just the people trying to make money for Computer Science; I pretty much see them as a lower form of professional. The real computer scientists are those who bring new concepts, not those who spend all their time planning. I accept we need people from other disciplines to enhance our working practices, but it should never be assumed that these people will take over the core subject. Computer Science was started by a bunch of crazy mathematicians (just look at Babbage, and how he was publicly ridiculed because he didn't complete his Analytical Engine). By the same logic one could have predicted back then that we would eventually not need mathematicians because computer scientists would replace them. But we still have the crazy mathematicians, because they are still required for the absolute core stuff.

From those principles, let's honestly look at how businesses work: you'd never get such a repository of lots of clever code put together. A 100% bug-free module for a specific task would cost more than the moon. (About £8.8x10^22 in Cheese Pounds.)

OhGodPleaseNo (2, Interesting)

cperciva (102828) | more than 8 years ago | (#16720369)

Code re-use is a great idea. Free software is a great idea. Taken together, and to an extreme, they can cause problems, particularly where security is concerned. What happens when someone finds a security flaw? How can you contact the people who are reusing your code if you have no idea who they are?

To take a personal example, my delta compression code [daemonology.net] , which I wrote for FreeBSD, is now being used by Apple and Mozilla to distribute updates; I've talked to their developers, and if I find a security flaw in this code (very unlikely, considering how simple it is), I'll be able to inform them and make sure they get the fix. On the other hand, I know developers from several Linux distributions have been looking at using my code, but I'm not sure if they're actually using it; and searching on Google reveals dozens of other people who are using (or at very least talking about using) this code.

Putting together software by scavenging code from all over the Internet is like eating out of a garbage dump: Some of what you get is good and some of what you get is bad; but when there's a nation-wide recall of contaminated spinach, you'll have no idea if what you're eating is going to kill you.

Simple matter of (1)

Archfeld (6757) | more than 8 years ago | (#16720395)

economics... That is, until the hardware demands recoding, reuse is both financially and programmatically a generally sound philosophy. Of course GIGO still applies :)

It won't happen UNTIL...(testability framework) (1)

Bluedove (93417) | more than 8 years ago | (#16720415)

It won't happen that way until there is a coherent, established framework for functional testing of the "building block" code. It is GREAT to reuse code, and I've worked on a few projects that became GPL due to my incorporation of GPL code. However, most professional projects I've worked on demand more on the reliability front, so that no time (time == $$money$$) is lost. Any reused/mined code incorporated into these projects has had to be so rigorously inspected and tested that, when you add the time spent banging together glue code, we might as well have started from scratch for those code chunks.

The exceptions to this are tiny code chunks that have a simple, testable API (e.g. no functions that return values that vary with time), libraries/code that have been so widely used that you can "mostly" assume they're valid (e.g. zlib), or the occasional banged-together "proof of concept" prototype app.

Until some standard framework is created by which those incorporating the code can test it for validity, this likely won't happen fast (other than for the exceptions listed above).

For the record, I don't know what that standard framework might be. Personally, I'm in the habit, whenever I write a class, of including a static test function that can be called to put the class through its paces and check all functions and expected data alignments for validity. (E.g. on some ARM processors I've worked on, structs are packed on 4-byte boundaries, and sometimes extra "surprise" filler bytes are inserted. Lots of fun to track those down when you're not expecting them.)

90% of anything is crap (1)

PhiRatE (39645) | more than 8 years ago | (#16720421)

The problem with this concept is deceptively simple:

1. The most difficult part of programming is developing software that handles unanticipated uses well
2. Most programmers are fine with something that just does the job they need it to do, and no more
3. Relatively few useful software components can operate in isolation; the rest place requirements on their environment

The consequence of this is that contrary to the theory that writing something once should be sufficient, the truth of the matter is that in the vast majority of practical cases, it is simpler to write the code once again, bespoke for your particular needs and environment, than it is to pick up the best available pre-written component to do it.

The essence of the problem is one of assumptions. Any component has an environment: the sum of the state and information it requires in order to operate. The component also (probably) has an API. This is the formal environment, the sum of state and information that it makes explicitly available to the programmer.

The problem is that there is no "right" percentage of the environment to formalise into an API. For all of you thinking "Well, the answer is obvious: everything!", imagine a component that manipulates dates.

Think about this for a moment. I use dates because they're an example that almost every language I have encountered has done "poorly" (where "poorly" means I end up implementing something I felt should have been there by default, like adding or subtracting a month).

The environment for a date component, the true environment, is larger than any date component I have ever seen has exposed. For a start, which calendar are you working in? What timezone? What are the DST considerations for that timezone? Is there some mechanism for updating the DST settings, for those crazy timezones that change their DST rules all the time? Is the beginning of the week Sunday or Monday? What is the current locale? If I add a month to the 31st of October, is it now the 30th of November, or the 1st of December? What if I then subtract a month? Is month addition associative?

Keep in mind that every question there has been solved at some point or another, and the majority of them are solved by the few really good date libraries out there (Perl has a particularly fantastic one, the name of which escapes me), but I've never seen one that did a good job of updating its daylight savings information; the majority of the *nix-style ones simply assume the operating system has good information, normally the catalog that came with your most recent install of glibc.

Anyway, back to the point: the issue is simply one of appropriateness. There are some components that make a lot of assumptions, but their interface is simple. There are others that make very few assumptions, but working with them is mind-bogglingly complicated. Anyone who has ever had to work with MapServer or equivalent mapping problems will understand the gradient. But it is not at all uncommon to find yourself in a situation, even with something as well covered as dates, where the conventional components don't cover the part of the environment you need covered. Interfaces are either too big and complex, too small and simple, or just focused in a different area (how many components have been dropped because they were perfect, except that they didn't handle multithreading?).

In the end, rewriting this stuff is going to be a fact of life for a long time yet (unless we get really smart AIs or a revolution in programming I can't foresee), although hopefully better coverage of more common problems will continue to be a feature (it's much easier to do things today than it was 10 years ago basically because of this, regardless of what the lisp guys say :)

The future is here (1)

nicolaiplum (169077) | more than 8 years ago | (#16720451)

and it's called CPAN.

Expert Systems and AI (1)

camperdave (969942) | more than 8 years ago | (#16720473)

I think that we are not going to be "mining" for code so much as using smarter and smarter tools to develop the code in the first place. Historically, we coded by flipping bits on the bare metal. Later we developed smarter tools, called assemblers and loaders, to flip the bits for us. Then we developed "high"-level languages like Fortran, Pascal, and C, and built tools to compile our programs. Our tools became smarter. We don't write the code for GUIs anymore: we drag and drop widgets in a design tool, click a button, and reams of code get written for us. Object-oriented programming could take us further still, as the GUI elements could already be pre-defined. (Take a card game, for example. All the methods for manipulating and displaying cards, stacks of cards, etc. are already in place. All the programmer has to do is code the rules of the game.)

As we progress in abstracting the data we deal with, we remove ourselves from raw coding. Once we start abstracting and manipulating programs as data, we will enter a realm where we will be able to do stuff like:

Programmer:"Give me a klondike solitaire program, but extend it to a crown and anchor deck".
Computer:Klondike Solitaire found.
Computer: Unknown term 'crown and anchor deck' in context Klondike Solitaire.
Computer: Unknown term 'crown and anchor deck' in context card games.
Computer: There is a dice game with crown and anchor symbols.
Programmer: "Create card deck variant crown and anchor deck by adding a suit with symbol crown, and by adding a suit with symbol anchor."
Computer: Card deck variant crown and anchor deck created. Shall I make this card deck variant publicly available?
Programmer: "Yes"
Computer: Klondike Solitaire Crown and Anchor variant ready.

Cost (effort) / Performance (1)

ishmalius (153450) | more than 8 years ago | (#16720475)

Often it is desirable to reuse someone else's library. But often it is still very valuable to write the code yourself, to avoid yet another dependency on someone else's code, and to avoid anything that might limit portability.

I think it is the same cost/performance threshold that people use in everyday life, in making a decision to purchase or "do it yourself." Is using a third-party library worth the hassle of integrating and packaging it?

For example, one developer wanted to add a default dependency on the enormous Boost C++ library to our project in order to provide a chart-drawing ability. Was it worth it to us to add such a burden? No. Would it be beneficial to an app whose purpose is to draw charts? Probably.
