
Swarm — a New Approach To Distributed Computation

timothy posted about 5 years ago | from the or-old-approach-to dept.

Software 80

An anonymous reader writes "Ian Clarke, creator of Freenet, has been working on a new open source project called Swarm. The concept is to allow a computer program to be distributed across multiple computers in a manner almost completely transparent to the programmer. The system observes the program executing and figures out how the workload should be distributed for maximum efficiency. Swarm is implemented in Scala. It's at an early prototype stage, and Ian has created a good 36-minute video explaining the concept and the current implementation."
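
The summary's "almost completely transparent" claim is easier to grasp with a sketch. Swarm builds on Scala's delimited-continuations compiler plugin; the moveTo primitive below is a stand-in for illustration, not necessarily Swarm's actual API. The point is that sequential-looking code contains points where the runtime can capture the rest of the computation and resume it on another node.

    // A minimal sketch of the programming model, assuming the Scala 2.8
    // continuations plugin (compile with -P:continuations:enable).
    import scala.util.continuations._

    object SwarmSketch {
      // Stand-in for a Swarm migration primitive: capture the rest of the
      // computation as a continuation k; a real runtime would serialize k
      // and resume it on the named node.
      def moveTo(node: String): Unit @cps[Unit] = shift { (k: Unit => Unit) =>
        println("[runtime] shipping continuation to " + node)
        k(()) // locally we just resume; remotely, k would go over the wire
      }

      def main(args: Array[String]): Unit = reset {
        val x = 40          // runs on the local node
        moveTo("nodeB")     // a migration point the runtime could pick automatically
        println(x + 2)      // conceptually runs on nodeB, with x carried along
      }
    }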

This'll be great for botnets (0, Insightful)

Anonymous Coward | about 5 years ago | (#29712209)

Just saying it out loud so we can start working on countermeasures.

Re:This'll be great for botnets (2, Funny)

K. S. Kyosuke (729550) | about 5 years ago | (#29712275)

I'm sure you would notice an apparently suspicious huge JVM process eating your CPU time. :]

Re:This'll be great for botnets (4, Insightful)

Darkness404 (1287218) | about 5 years ago | (#29712309)

You know though, most people don't ever check that. They think that over time Windows just "gets slow" because hardware "goes obsolete". So when that happens they think they have to buy a new computer.

Re:This'll be great for botnets (2, Interesting)

NoYob (1630681) | about 5 years ago | (#29712497)

Yeah. Visit a website with an applet and you get the JVM startup and it stays up and running even after you leave the website and visit websites that don't have applets. In other words, I probably wouldn't notice at first either and I'd be chugging along until I restarted my machine and saw the JVM pop-up again for no apparent reason. Other folks who have no idea wtf a JVM is would never notice.

Re:This'll be great for botnets (1)

bencoder (1197139) | about 5 years ago | (#29712719)

That's true to some degree. But computers do slow down as they age. Components damaged by the constant heating cause more errors and therefore require retransmission or error correction, slowing things down.

Re:This'll be great for botnets (2, Informative)

jeisner (56981) | about 5 years ago | (#29712933)

That's true to some degree. But computers do slow down as they age. Components damaged by the constant heating cause more errors and therefore require retransmission or error correction, slowing things down.

My Dell desktop from 1999 has been running like the wind again since last week, when I reverted it to its 2002 state from backup tape. It goes superfast now that it's virus-free, off the network, and running old apps on Windows 98.

I was only trying to recover some old files before junking an unusable machine, but I may keep it around now as a non-networked machine for the kids.

Re:This'll be great for botnets (1)

bencoder (1197139) | about 5 years ago | (#29713201)

I wasn't at all arguing that old hardware becomes unusable, just that the GGP's post seemed to say (not explicitly, granted) that slowdown is only caused by software, which isn't entirely true.

Re:This'll be great for botnets (0)

Anonymous Coward | about 5 years ago | (#29714081)

Agree. Most computers don't use error correction, so there is no slowdown due to the hardware. Even if error correction were being used, the most likely component to fail or degrade is the power supply, not the CPU or memory. Your computer is much more likely to instantaneously exhibit an error rate of 50% (as the power supply explodes in a fireball) than to degrade slowly.

Slowdown is more likely to be caused by automatic upgrades dragging in software bloat, and trying to run software intended for a powerful modern CPU on an older CPU. Easily fixed, as the parent did, by restoring the original software configuration and throwing out all the garbage.

While you're at it, blow all the dust out, as dust can restrict the airflow, causing overheating (and errors), which in turn causes random crashes.

Re:This'll be great for botnets (2, Interesting)

ScrewMaster (602015) | about 5 years ago | (#29714249)

That's true to some degree. But computers do slow down as they age. Components damaged by the constant heating cause more errors and therefore require retransmission or error correction, slowing things down.

No, not really. PCs are nowhere near that sophisticated. A high-speed CPU bus is not like a DSL connection. Pretty much it has to work near-perfectly, or it's blue-screen city.

For example, I have a couple of Athlon 1.4 GHz machines that are running just as fast as the day I built them, and they've never been turned off. Also have an old ThinkPad R41 ... still as fast as it ever was (faster, actually ... I have it running a stripped-down version of XP.) If you have a motherboard or PC that is getting errors due to heating, what you're going to see are crashes and lockups, not slowdowns. Personal computers are not mainframes or minicomputers: even with ECC memory they are not fault tolerant to any significant degree, and frankly I think it's a wonder they work as well as they do (Windows issues aside.) When a component starts generating errors your average PC just breaks ... if you're lucky it's just the faulty subsystem, but if you're not, the machine is toast.

People's machines slow down because a. they never defrag their hard drives and b. they get infected. It just takes a single badly written piece of malware to turn an otherwise decent machine into a 386, yet users frequently blame the hardware for being too old, as if that somehow explains poor performance. Many people are completely amazed when I clean up their system for them and pack the hard disk. "Wow, it's like a whole new computer!" No, dimbulb, it's the same computer you've always had, you were just too lazy to give it even minimum maintenance. I'm glad I'm not in IT: it's a lot like being a doctor. You have to deal with people who have no ability to think rationally about their problems, and even when you give them good advice they never follow it anyway.

Re:This'll be great for botnets (1)

wvmarle (1070040) | about 5 years ago | (#29716411)

So, long story short, it's time we got some proper software running on those computers.

For 99% of people a computer is an appliance, like the TV and the stereo. It gets no maintenance other than being dusted off once in a while.

Defragging hard drives: is that still necessary in the Windows world? I stopped doing this more than 15 years ago, at the time running OS/2 and its HPFS.

Getting infected: yes, that's an issue, and I honestly have no idea how to really prevent it. Even a fully locked-down OS will always allow infections to take place, as long as there is a human factor present.

Computers should be considered low-maintenance appliances by the designers, and hardware and software should be designed and written with that in mind.

Re:This'll be great for botnets (1)

Artifex (18308) | about 5 years ago | (#29717619)

People's machines slow down because a. they never defrag their hard drives and b. they get infected.

You also need to take into account that they may install new service packs and other software just to tread water, not to mention adding new bloatware. For example, probably hundreds of thousands of PC desktops and laptops were sold with Windows XP Home and 256MB of RAM, which were not slow at the time. But try running them today with Service Pack 3, plus antivirus (like AVG) and firewall (like ZoneAlarm) programs, without adding any RAM, and they're terrible. Now consider that many of those same systems also shipped with Microsoft Office 200x, and what the latest service packs for that add to the load.

Re:This'll be great for botnets (1)

master_p (608214) | about 5 years ago | (#29713323)

That's not really bad. It moves the PC market forward.

Re:This'll be great for botnets (1)

Thing 1 (178996) | about 5 years ago | (#29720169)

That's not really bad. It moves the PC market forward.

Learn the broken window fallacy, [wikipedia.org] please.

Re:This'll be great for botnets (3, Insightful)

BikeHelmet (1437881) | about 5 years ago | (#29713427)

In my experience, Java is not the reason people buy new computers.

Their computers slow down from viruses, or virus-like Antivirus, and then they think they need to upgrade.

Lately commercially made programs (AIM? Windows Live stuff? Most printer software? Most shareware?) seem to consume as much memory as a whole JVM, despite being written in C. This has led me to conclude that companies really don't give a shit how much memory their software uses. This is quite ironically pushing Java closer and closer to C in actual memory and CPU usage.

Disclaimer: I know C is amazing when used properly - but it seems like only small FOSS projects and apps destined for phones have any sort of optimization work done. I've seen daemons use 200KB on a tiny Linux handheld, but multiple megabytes is the norm on any desktop.

Re:I'd pick on the $NtServicePacks (1)

crowne (1375197) | about 5 years ago | (#29724051)

It's the fricken $NtServicePacks in C:\WINDOWS that slow the whole damn thing down. But don't delete them manually; rather, run "Disk Cleanup" and let it sync the corresponding registry entries, so that Windows can revert to its last known good configuration ... 3.1 IMO

Re:This'll be great for botnets (2, Funny)

Linker3000 (626634) | about 5 years ago | (#29712513)

Nope, I already use OpenOffice!

Re:This'll be great for botnets (1)

jipn4 (1367823) | about 5 years ago | (#29713713)

I'm sure you would notice an apparently suspicious huge JVM process eating your CPU time. :]

How is that different from any other kind of JVM process?

Earlier (4, Interesting)

WetCat (558132) | about 5 years ago | (#29712243)

.. was Mosix http://www.mosix.org/ [mosix.org]
It allowed Mosix-running Linux computers to distribute their loads across other connected Mosix-running Linux computers.
Processes migrated to other nodes transparently; no programming changes were needed.

Re:Earlier (3, Informative)

K. S. Kyosuke (729550) | about 5 years ago | (#29712293)

And this one works at the application level, across various OSes. No computer repurposing and reinstalling is needed.

Re:Earlier (1)

Sanity (1431) | about 5 years ago | (#29713421)

Very interesting, hadn't seen that before!

One key component of Swarm is that a supervisor process uses a clustering algorithm to determine how data should be distributed such that it minimizes the number of times a continuation must jump between different computers. Does Mosix have any equivalent?
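
A toy sketch of that supervisor idea (all names invented here, not Swarm's API): treat co-access counts gathered at runtime as edge weights, and greedily pin strongly linked data items to the same node so a continuation rarely has to jump.

    // Greedy co-location: strongest co-access pairs first, so data that one
    // continuation touches together tends to share a node. Illustrative only.
    object CoLocate {
      def assign(pairCounts: Map[(String, String), Long],
                 nodes: Vector[String]): Map[String, String] = {
        var home = Map.empty[String, String] // datum -> node
        var next = 0
        for (((a, b), _) <- pairCounts.toSeq.sortBy(-_._2)) {
          (home.get(a), home.get(b)) match {
            case (Some(n), None) => home += (b -> n) // pull b to a's node
            case (None, Some(n)) => home += (a -> n)
            case (None, None) =>
              val n = nodes(next % nodes.size); next += 1 // fresh node, round-robin
              home += (a -> n); home += (b -> n)
            case _ => () // both already placed; a real system might rebalance here
          }
        }
        home
      }
    }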

Why has Mosix not achieved wider usage, for example, allowing web applications to scale up using multiple servers?

Re:Earlier (3, Informative)

mrmeval (662166) | about 5 years ago | (#29715937)

It's not free software, so you can't use it except for personal or educational use. openMosix died:
http://openmosix.sourceforge.net/ [sourceforge.net]

Re:Earlier (1)

drinkypoo (153816) | about 5 years ago | (#29717307)

Is it wrong to hope that MOSIX dies, and possibly Frees their code? I want a single system image, where when I bring up a laptop all my computers get faster, but I don't want to be stuck with one kernel and 8 nodes (it's for home use, so I could use it but I'd have to track their kernel.)

Another Earlier - ERLANG! (4, Informative)

mcrbids (148650) | about 5 years ago | (#29714491)

Erlang apparently gets it right. It scales smoothly from single core to multi-core to multi-server in a near-linear fashion. Astonishingly reliable, having achieved nine nines of uptime - much less than a second of downtime - in a year. Purposely designed to mitigate shared-memory problems. Built for hot-switchover - you can upgrade Erlang programs without closing them first!

In just about every conceivable way, Erlang is the right choice for high-end multi-core, multi-system clustered application development. I have a large-stack, clustered application written in PHP. While it works well, there are limits to what we can do within a single process - a problem that's likely to become worse over time as needs continue to scale up. If I were to do it all over again, I'd take a good, hard look at Erlang.
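
For readers who haven't seen the style: Erlang's scaling comes from shared-nothing processes that communicate only by message. A rough analogue in Scala (this thread's language), using plain futures instead of Erlang processes, purely as an illustration:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object ShareNothing {
      // Each worker owns its slice; no shared mutable state, and results
      // come back as values (the moral equivalent of messages).
      def sumSquares(xs: Vector[Long], workers: Int): Long = {
        val chunk = math.max(1, xs.size / workers)
        val parts = xs.grouped(chunk).toVector.map(c => Future(c.map(x => x * x).sum))
        Await.result(Future.sequence(parts), 1.minute).sum
      }

      def main(args: Array[String]): Unit =
        println(sumSquares(Vector.tabulate(1000000)(_.toLong),
                           Runtime.getRuntime.availableProcessors))
    }

Erlang adds cheap process isolation, supervision trees, and hot code upgrade on top of this pattern; none of that is shown here.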

Re:Another Earlier - ERLANG! (1)

wvmarle (1070040) | about 5 years ago | (#29716447)

Maybe it's time for you to start looking into Erlang (and alternatives) NOW, not when your demands get so high that your current application breaks down. I suspect this is really complex stuff, very hard to get right, let alone to understand what it is really doing in the first place.

Re:Another Earlier - ERLANG! (1)

tuomoks (246421) | about 5 years ago | (#29720121)

Correct! I have written distributed systems for half of my life (longer than the age of maybe most readers here?), relying mainly on 'C', TAL, even Pascal and assembler because of company requirements, but (just for fun) I tried it in Erlang - amazing for such an old language!

It has about everything you can think of, and all that in the language! Multiple platforms, its own transaction / memory / whatever databases, it can (I tried that!) be used with all the main languages, easy syntax, small programs, failsafe, etc., etc.

Used by (huge) corporations in "mission critical" systems - unfortunately still mostly unknown, maybe because it's free and the world today is looking for "commercial" miracle systems. Anyway, Swarm looks like a good idea; I hope it's successful - it just sounds too heavy. New and faster hardware is not a solution; it just (sometimes) hides the symptoms too well.

Re:Another Earlier - ERLANG! (1)

Mr.Ned (79679) | about 5 years ago | (#29720427)

- Erlang didn't get less than a second of downtime in a year; an application written in Erlang got less than a second of downtime in a year. I bet people clever enough to write such an application in Erlang could have written it in another language. Would it have been more difficult? Probably. But just because you use Erlang doesn't mean that your application is magically never going to have downtime - you're still going to have to work hard at it.

- Erlang is not necessarily the right choice for "high-end multi-core multi-system clustered application development". Erlang is not fast at math, and if you have a clustered application that computes fluid dynamics or cracks RC5, you'll probably keep writing it in C or C++ or Fortran because they do math fast. Don't believe me? The n-body benchmark over at the Computer Language Benchmarks Game is all double-float arithmetic, and the Erlang version takes almost six times as long as the version in Common Lisp and almost eight times as long as the version in Fortran.

Re:Earlier (0)

Anonymous Coward | about 5 years ago | (#29716367)

Mosix is alive and kicking; my university uses it all the time.

Re:Earlier (1)

Gloria6 (1654873) | about 5 years ago | (#29718903)

Well, there is good news and bad news...

The good news is that Mosix is still alive and well.

The bad news is that although Mosix is excellent for High Performance Computing (HPC), it is totally useless for data-mining and web applications, which are just what Swarm is all about!

Name... Neat idea though (2, Interesting)

Anubis350 (772791) | about 5 years ago | (#29712259)

At first I thought they were talking about Swarm [swarm.org] , an "attempt to gather up many different kinds of models that go under the heading of "agent-based modeling" and create a common language and programming approach" [freefaculty.org] that I've worked with before. I'm surprised they went with the name of an established toolkit in another area of programming. Still, it looks like a cool tool; another layer of abstraction to make distributed computing easier might make it more attractive to those who don't use it much at the moment.

Re:Name... Neat idea though (1)

Anubis350 (772791) | about 5 years ago | (#29712263)

Gah, /. ate my formatting apparently....

Re:Name... Neat idea though (2, Insightful)

hazem (472289) | about 5 years ago | (#29712771)

I'm just getting into agent-based modeling myself and I had exactly the same thought... why would they use the name of an established tool, especially when there are similarities in the concepts? This seems like a recipe for confusion.

A good first step when starting an open project is to check proposedprojectname.org and see if there's anything active there. Or even just Google it - if another project with the same name shows up near the top, it's probably a good idea to pick another name.

I'm sure there are plenty of synonyms for "swarm" that capture the idea, if not an alternate spelling.

But like you said, it does sound like an interesting project.

Re:Name... Neat idea though (2, Informative)

Anonymous Coward | about 5 years ago | (#29713899)

From the FAQ [google.com] :

Did you know that there are other projects called "Swarm"?

Yes, we did. We do not believe that 100% uniqueness is a prerequisite for a project name. Remember that the word "swarm" has been in use for over a thousand years; it wasn't invented by any software project!

Our opinion (born of painful past experience) is that it is better to have a good non-unique name than a bad unique name. Of course, if someone can suggest a good unique name, we'll give it serious consideration.

Re:Name... Neat idea though (1)

chuseq (846458) | about 5 years ago | (#29715367)

Here is another program called swarm to submit jobs in a cluster: http://biowulf.nih.gov/apps/swarm.html [nih.gov]

Re:Name... Neat idea though (1)

Hucko (998827) | about 5 years ago | (#29715839)

How about ...

Hurd?

Obligatory (4, Funny)

arcsimm (1084173) | about 5 years ago | (#29712385)

Imagine a Beowulf cluster of... err. Oh.

Re:Obligatory (1)

Dekker3D (989692) | about 5 years ago | (#29713001)

.. of Beowulf clusters? You could just group them all into one Grendel cluster!

OT: Already like it for using vimeo (0)

Anonymous Coward | about 5 years ago | (#29712393)

It's so much cleaner, faster, higher quality and better to use than Youtube. Now if only they followed the trend and wrapped that Flash object in a <video> tag.

Re:OT: Already like it for using vimeo (1)

Sanity (1431) | about 5 years ago | (#29713457)

YouTube has a 10-minute limit on videos; this video is 36 minutes.

looks intriguing (5, Insightful)

Trepidity (597) | about 5 years ago | (#29712397)

The thing that's always killed this idea (along with automatic parallelization even on the same machine) is that the overhead of figuring out what's worth distributing, and the additional overhead from mistakes (accidentally distributing trivial computations), often swamps the gains from the multiple processors banging away on it simultaneously. Determining statically what's worth distributing is very hard, since solving it properly is undecidable (basically equivalent to the halting problem), and even solving it in a significant enough subset of cases to be useful has proved difficult. It looks like this project is monitoring dynamically to determine what to distribute, which seems likely to be more fruitful, although historically that approach has suffered from the overhead of the monitoring (like always running your code with debugging instrumentation turned on).

I certainly hope he has a breakthrough vs. past approaches, or it could just be that advances in a lot of areas of technology have given him a better substrate on which to build things that naturally mitigates lots of the problems these things used to have (automatic parallelization research started probably ahead of its time, back in the 1970s, so that most academic stuff was killed off by the 1990s after no really knock-down results emerged). It's not entirely clear to me what the killer advance is, though. The particular variety of portable continuations? A good way of easily monitoring computations? Something that makes the data-dependency analysis particularly easy?

Re:looks intriguing (2, Interesting)

djupedal (584558) | about 5 years ago | (#29712585)

> The thing that's always killed this idea (along with automatic parallelization even on the same machine) is that the overhead of figuring out what's worth distributing

That kind of thinking is so 90's. Brute-force data mining, for example, means harvesting it all and letting target groups sort out what they want. It is a waste of time to 'decide'. That's like stopping to inspect every shovelful of ore as it comes out of the ground. All or nothing has been the default for some time now, and this is just another example.

Re:looks intriguing (-1, Offtopic)

Anonymous Coward | about 5 years ago | (#29712807)

Sorry, but your comment looks like all buzzwords and no meaning.

Re:looks intriguing (0)

Anonymous Coward | about 5 years ago | (#29717451)

sorry, but your meaning looks like all meaning and no buzzwords

Re:looks intriguing (1)

vidarh (309115) | about 5 years ago | (#29716359)

That makes sense when you have enough resources to throw at the problem. But the entire point of this technology is to semi-transparently add multi-server scalability. In that context, what you wrote above makes no sense whatsoever - without a scaling mechanism you won't *have* enough resources to throw at the problem. Now, that scaling mechanism, for some problems, is as easy as "divide the problem by the number of servers and spawn an appropriate number of workers to process each sub-part", in which case this technology isn't needed. But a large number of interesting problems are *hard* to split up, and may easily have data dependencies that make naively distributing the input space over multiple servers slower than not scaling up at all.

Re:looks intriguing (3, Insightful)

FlyingBishop (1293238) | about 5 years ago | (#29713013)

Depending on how many cores you have access to, distributing trivial computations may not matter. If we ever start seeing 32-core desktop machines, for example, you start to get to the point where forking could create a realtime speedup even though in absolute terms you've wasted five times as many cycles.

Re:looks intriguing (1)

Trepidity (597) | about 5 years ago | (#29713043)

It definitely lowers the bar for how good you have to be, but I'm not sure it makes it irrelevant. Just the overhead of putting computations into some sort of container (thunks of some sort) and getting them back out can get absurd if the computations turn out to be, say, smaller than 100 instructions.

Re:looks intriguing (1)

six11 (579) | about 5 years ago | (#29713717)

The appealing thing about this is that the problem stems from the way we program. It is really difficult to escape the single-core mindset with the languages we have today. I don't necessarily want to invoke the Whorf hypothesis here, but to some degree our expressive power is limited by the language we are using. The imperative languages that I know lend themselves to single-processor execution. I wonder if new languages specifically designed for multi-core programming would be able to avoid some of the problems you mention (like the transaction costs of distributing trivial computation). I really have no idea what they would look like; I suspect part of the problem is that the textual (1-D) representation of a C program (or whatever) makes it hard to 'see' where data is and where it is processed. (I think this is as much a cognitive science problem as it is a computer science one.) A diagrammatic (2-D) language might be better.

Re:looks intriguing (4, Interesting)

david.given (6740) | about 5 years ago | (#29714223)

A friend of mine did a system like this about ten years ago --- hi, Iain! --- called Flit. It had a number of the same features, although using a custom language; it had some rather interesting concepts, such as asynchronous function calls that would return immediately, spawning a new thread, but return a future: a value whose value was not known yet. Accessing the value would cause the thread to be waited upon.
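
Flit's own syntax isn't shown in the comment, but Scala's standard library Future has exactly the semantics described: the call returns immediately and the work proceeds on another thread, while touching the value waits on it. A minimal illustration:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object FutureDemo {
      def slowSquare(x: Int): Int = { Thread.sleep(1000); x * x }

      def main(args: Array[String]): Unit = {
        val f: Future[Int] = Future(slowSquare(7)) // returns immediately
        println("call returned; value not yet known")
        println(Await.result(f, 5.seconds)) // accessing the value waits, as in Flit
      }
    }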

Unfortunately, the killer problem that sank Flit was distributed garbage collection. Collecting garbage across multiple machines is really, really hard, and he never found a usable approach to make it work. I was very disappointed to see that Swarm's garbage collection is still on the to-do list --- he doesn't appear to have started to think about it yet.

I hope he can make Swarm work --- it's something that we could all definitely use. But there are fundamental theoretical problems that have to be solved first...

Re:looks intriguing (1)

simplerThanPossible (1056682) | about 5 years ago | (#29715777)

It's not always true that monitoring is a cost: JIT (just-in-time) compilation monitors execution, and has yielded significant speed-ups in Java.

I conjecture that the key to distributed computing will turn out to be wasting resources (inefficiency) in some way that serves the overall goal.

Re:looks intriguing (1)

vidarh (309115) | about 5 years ago | (#29716377)

I see what you're trying to say, but claiming monitoring doesn't have a cost is blatantly false - gathering the monitoring data takes a non-zero amount of resources. You're right that monitoring can often be a *net benefit* though.

Sounds good (2, Interesting)

cwire4 (831206) | about 5 years ago | (#29712461)

It sounds like a good idea, but I don't think the project is far enough along in this video to warrant a posting. Maybe the example was too trivial to be appreciated in the video, but his explicitly offloading the task to another computer doesn't appear to be very far beyond standard client-server models. If it were already automatically transporting processing between different nodes, it'd be much cooler, but that is not a trivial problem to solve. Deciding what should and what shouldn't be distributed at the application level will be extremely hard, I imagine. If the project were farther along in its maturity, I'd be much more interested.

We have just witnessed the birth of a new meme... (3, Funny)

Linker3000 (626634) | about 5 years ago | (#29712477)

In Ian Clarke's Swarm, World "Hellos" you!

It doesn't do much yet (4, Informative)

svick (1158077) | about 5 years ago | (#29712617)

If I understand what he says correctly, it is something like this: distributing computation is hard, really hard. It's so hard that nobody has ever done it properly. But Swarm will change this! How? Well, we don't know yet; there are so many interesting problems we have to solve first. And you can help!

Re:It doesn't do much yet (4, Insightful)

Anonymous Coward | about 5 years ago | (#29713015)

Mod parent up. This is exactly what Ian did with Freenet.

He cobbled together an overly-simplistic prototype to address a set of very difficult unsolved problems in anonymous communication and then farmed out the actual real-world legwork on those problems to interested open source developers while Ian himself effectively abandoned Freenet for other (paying) gigs. To this day he is credited, somewhat ironically, as "the creator of Freenet," and a decade later the Freenet project still hasn't solved the problems it set out to solve, even after changing the fundamental network architecture several times.

Great career strategy though. Get credit for the shiny things and pass the shame of failure off on others. He's CEO material all the way.

Re:It doesn't do much yet (-1, Troll)

Anonymous Coward | about 5 years ago | (#29713503)

Yeah, how dare he share his ideas with the world for free, what an asshole.

He should have done diddly-squat with his life like you have.

Re:It doesn't do much yet (0)

Anonymous Coward | about 5 years ago | (#29715103)

He cobbled together an overly-simplistic prototype

Isn't "simplistic" assumed with a "prototype"?

then farmed out the actual real-world legwork on those problems to interested open source developers

Otherwise known as "starting an open source project". Seriously, you're criticizing the fact that he asked the open source community for help, and they responded?

while Ian himself effectively abandoned Freenet for other (paying) gigs

Ian abandoned Freenet? Someone needs to tell the Freenet community that. And he did it for paying gigs? How utterly evil of him to try to earn a living.

To this day he is credited, somewhat ironically, as "the creator of Freenet,"

Ironically? Why is that? His 1999 paper still forms the basis for Freenet's routing algorithm today.

a decade later the Freenet project still hasn't solved the problems it set out to solve

Democracy and freedom of speech for all? No, you are right, but the ACLU hasn't achieved its goals yet either after more than a century.

even after changing the fundamental network architecture several times

Yeah, how dare they not come up with the perfect design on day one.

Great career strategy though. Get credit for the shiny things and pass the shame of failure off on others.

What shame and what failure?

He's CEO material all the way.

You sound bitter, did a CEO steal your girlfriend or something?

Re:It doesn't do much yet (2, Interesting)

Hobbex (41473) | about 5 years ago | (#29717487)

I don't think this characterization is fair, and I think you would have a hard time finding somebody who actually worked on Freenet who agrees with you. Ian's original technical ideas for Freenet - as well as his vision - are very much still a big part of the architecture, and he could never be said to have abandoned it. In fact, time has vindicated many of his ideas to a far greater extent than I expected when we started working with them. You are right that the project has not yet solved the problems it set out to solve - but since it has wildly high ambitions, that should hardly be surprising. I think it has made a positive contribution all the same, if only to our understanding of many of the issues involved.

It is true that the press has had a tendency to paint Ian as the lone father of the project, but that is just the way the press works, and I have never seen Ian taking credit for other people's work. And, to be honest, after you have done it a few times, you start realizing that dealing with the press isn't nearly as fun as it is cracked up to be, and that Ian has a knack for communication that most nerds, myself included, do not. I think Freenet has been very well served by Ian's ability to effectively communicate its goals and gain attention -- among other things it has allowed several coders, of whom I was the first but not the last, to work full time for the project for certain periods. That said, I was a bit disappointed when the NYTimes ran a cover story on a presentation Ian and I held at Defcon and forgot to mention me at all, but I got over it :-).

Re:It doesn't do much yet (1)

Concern (819622) | about 5 years ago | (#29720033)

I agree. That said, I actually sat through the entire video, and sadly, it was a disappointment. (I'm ignoring everything he did before.) He has great intelligence, a wonderful idea, some vague but intriguing suggestions on how to accomplish it, and very little working code.

At least he is up front about this - if by up front you count a litany of near-impossible problems you do not even have an idea how to solve in the last 8 minutes of your 35-minute video. :) I do hope for the death of the fad of using videos where a perfectly good blog post will do.

In truth I think he is on to something, or at least, that problems of parallelism will be successfully attacked in ways similar to what he suggests. I can't really cast blame on anyone for being excited by an idea or building a tiny prototype that does something cool. But I would personally be way too embarrassed to say "hey guys look at this" with so little meaningful work done.

Re:It doesn't do much yet (1)

Neoncow (802085) | about 5 years ago | (#29724291)

I do hope for the death of the fad of using videos where a perfectly good blog post will do.

I thought that's what slashdot comments were for?

Re:It doesn't do much yet (1, Insightful)

Anonymous Coward | about 5 years ago | (#29713607)

"Writing free operating systems is hard, really hard. Its so hard that nobody ever did it properly. But Linux will change this! How? Well, I've produced some code that works, but there is a lot left to do, and you can help!" - Linus around 1991 (paraphrased)

Clarke never said he doesn't know how to solve the remaining problems (of which, he freely admits in the video, there are many). Would you prefer that no open source project was released to the world until it was 100% finished? Good luck with that.

Re:It doesn't do much yet (1)

svick (1158077) | about 5 years ago | (#29714537)

I just felt from the summary that he already had something to show. And in the quite long video, he shows that he has written some code, but it doesn't do anything interesting (yet). I expected more. Not a perfect, 100% finished project, but at least something.

36 minutes? (1)

bennomatic (691188) | about 5 years ago | (#29712819)

So instead of R-ing TFA, I have to WTFV? Sigh.

Umm.... (0)

Anonymous Coward | about 5 years ago | (#29713169)

The system observes the program executing and figures out how the workload should be distributed for maximum efficiency.

... this is unsolvable.

We can't even do this (efficiently) for an app running on a single core. How do they expect to do it when latencies are higher and throughput is lower?

Can they determine if the program will terminate or loop forever too?

Re:Umm.... (1)

Sanity (1431) | about 5 years ago | (#29713631)

It's unsolvable to do in advance, but quite possible to do while observing the running code (a bit like how a filesystem optimizes the locations of data on disk).
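
A toy sketch of that observe-then-optimize loop (names invented for illustration, not Swarm's code): record which node touches each datum, then rehome each datum with its most frequent accessor.

    import scala.collection.mutable

    object AccessMonitor {
      // (datum, node) -> observed access count
      private val counts = mutable.Map.empty[(String, String), Long]

      def recordAccess(datum: String, fromNode: String): Unit =
        counts((datum, fromNode)) = counts.getOrElse((datum, fromNode), 0L) + 1

      // Home each datum on its most frequent accessor, the way a filesystem
      // moves hot data to where it is cheapest to reach.
      def preferredHome(datum: String): Option[String] =
        counts.collect { case ((d, n), c) if d == datum => (n, c) }
              .toSeq.sortBy(-_._2).headOption.map(_._1)
    }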

Alpha code... (3, Funny)

Linker3000 (626634) | about 5 years ago | (#29713171)

Computer 1: MOV AL...what? No more? MOV AL what? I need a value! WTF am I supposed to do with that!?

Computer 2: 09? Nine? Who gave me a nine on its own? That doesn't make any sense! Jeez! Hey, anyone out there missing some data?

Computer 3: Not me, I'm pushing the registers onto the stack

Computer 4: Nope, I've got an INT

Computer 5: Oh, hey, it could be me - does NOP have a value? No? Sorry, my bad!

Computer 1: Nine - yeah, nine - Well, I could stick that in AL if no-one else wants it!?

Computer 3: Oh, heck, give it to 1. I've just got a POP instruction so I am going to obliterate it anyway.....

Prey (0)

Anonymous Coward | about 5 years ago | (#29713681)

Did anyone else read the summary and think of Michael Crichton's Prey, and how this might apply (plus the use of the word 'swarm')?

Yet another language. (1)

nurb432 (527695) | about 5 years ago | (#29714021)

I do respect Ian, but can't we do this with the existing language infrastructure and just extend it?

Re:Yet another language. (1)

DetpackJump (1219130) | about 5 years ago | (#29717553)

with the existing language infrastructure and just extend it?

That's exactly what Scala does. It "extends" Java in a sense and still runs on the JVM. Swarm is essentially a creative new way of using a mature platform, assuming it works at some point.
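
A trivial illustration of that point (not from TFA): Scala source can use Java classes directly, because both compile to bytecode on the same JVM.

    object Interop {
      def main(args: Array[String]): Unit = {
        val list = new java.util.ArrayList[String]() // a plain Java collection
        list.add("runs"); list.add("on"); list.add("the"); list.add("JVM")
        println(String.join(" ", list)) // a java.lang.String static method (Java 8+)
      }
    }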

VMware doing it (1)

invisik (227250) | about 5 years ago | (#29714159)

Isn't that what the new vSphere or some up-and-coming release from VMware is supposed to do?

-m

Fix Freenet First (1)

xquark (649804) | about 5 years ago | (#29714391)

I think he should fix the monstrosity that is Freenet before he jumps onto other things.

Freenet (0)

Anonymous Coward | about 5 years ago | (#29715235)

Freenet is a horrible failure. The only thing on there is porn which loads slower than using a dial-up connection, and the "websites" which actually *will* load are random. Freenet is a horrible piece of trash, nobody should care what other garbage this guy creates. I've asked him to please stop with this shit, but he wouldn't listen to me.

Distributed what now? (1)

st0rmshad0w (412661) | about 5 years ago | (#29715361)

I read that as "distributed copulation" for some reason. I need more sleep.

Big claims, belied by reality? (1)

unixtechie (1370847) | about 5 years ago | (#29715629)

"Ian Clark of the Freenet fame".. Actually, practically no claim about Freenet came true. The authors advertised "anonymity" etc. etc. at the same time as university professors published studies of statistics about the snooped connections: to any node present on the network for some time it is elementary to collect IPs.
It was painful to see so many users completely duped by the untrue claims, which their authors knew pretty well were untrue (and of which fact one-word admissions can be found buried somewhere in the wiki on their site).

Ian Clarke and his collaborator knew nothing of the concept of the small world (a type of graph that naturally arises in such networks), and therefore were not aware of the conditions (i.e. parameters that have to be set for the connecting nodes in their network) needed to make the network self-sustaining. When pointed to the concept, they chose the models of Newman, a prolific publisher of computer-simulated abstractions, rather than those of Barabási, who offers much more realistic and practical ideas about this kind of network topology.

People do not change, really.
So what I'd expect from this announcement is a repetition of the Freenet story: a real and interesting problem, inflated claims, and no actual solution - just claims and "development" for years to come.

So I remain a pessimist.

Re:Big claims, belied by reality? (0)

Anonymous Coward | about 5 years ago | (#29718007)

You don't know what you are talking about.

The authors advertised "anonymity" etc. at the same time as university professors published statistical studies of snooped connections: for any node present on the network for some time, it is elementary to collect IPs.

Citation? There have been papers published claiming to have found flaws in Freenet. Sometimes they were simply wrong; on other occasions they had a point, and Freenet was modified to prevent the exploit.

Ian Clarke and his collaborator knew nothing of the concept of the small world

Now you really reveal that you don't know what you are talking about. If you actually care about understanding Freenet, you should read Oskar Sandberg's thesis [freenetproject.org], which explains in great depth the connection between Freenet's routing algorithm and small-world networks.

GPUs (1)

WarJolt (990309) | about 5 years ago | (#29716515)

Is there potential to use this on a GPU? The current problems with GPU programming seem like the kind of thing Swarm could solve.

New? (0)

Anonymous Coward | about 5 years ago | (#29716537)

Maybe the article needs correction, because "swarm" on its own is a fairly mature (as in old) concept in distributed computing.

SWARM is good (1)

BhaKi (1316335) | about 5 years ago | (#29717491)

I used SWARM a year ago and I was impressed by the possibilities it offers. It was also pretty stable. I'm sure it has achieved a very reliable level of stability by now.

whatever happened to aglets? (1)

speculatrix (678524) | about 5 years ago | (#29718811)

Way, way back, IBM did some stuff with Java and agents... http://wapedia.mobi/en/Aglets [wapedia.mobi]

Prey (0)

Anonymous Coward | about 5 years ago | (#29720507)

Did no one read "Prey"? Give multiple computers the ability to work together today, and tomorrow we'll have a bunch of computers trying to take over the world. As if grandma doesn't have enough problems with the computer when it really does start acting on its own.

Hmm. Sounds like Amoeba. (1)

Niet3sche (534663) | about 5 years ago | (#29730125)

I've not RTFA'd yet, but at first blush this sounds suspiciously like the Amoeba project. The net result of Amoeba was that you'd end up with a large virtual machine, comprised of many individual machines scattered across different sites. How is this different?