
Comments


Yahoo Ends Talks With Microsoft, Embraces Google Instead

MOBE2001 Yahoo Needs Neither Microsoft Nor Google (214 comments)

Jerry Yang did the right thing, in my opinion. He must hold on to his core search and advertising business. However, he would do well to diversify as soon as possible. Consider that both Microsoft and Intel have been riding on last century's legacy technology (x86 and Windows) for a long time. That sweet ride can't last forever. Now that the industry is transitioning from sequential processing to massively parallel computing, and given that neither Microsoft nor Intel has delivered on the real promise of multicore processors, Yahoo has the opportunity of a lifetime to sneak up behind those two slow-moving behemoths and steal their pot of gold. Someone should tell Jerry before it's too late. Multicore processors are where the real action is. Whoever solves the parallel-programming/multicore-design problem will rule the computer industry in this century.

more than 6 years ago

Are Academic Journals Obsolete?

MOBE2001 Peer Review is Elitism (317 comments)

The real purpose of peer review is not quality control but control, period. It is a mechanism used by an elitist group to keep outsiders at bay. Thus science becomes immune to public scrutiny, which is not a good thing. As Paul Feyerabend said (paraphrasing), we did not get rid of the dictatorship of the one true religion only to fall under the tyranny of another.

Peer review is an incestuous process that works for a while but eventually engenders ridiculously hideous monsters. Examples are time travel, cats that are both dead and alive when nobody is looking, parallel universes, dimensions curled up into little balls so tiny as to be unobservable, etc... This is the reason that Feyerabend wrote in Against Method that "the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence. It is time to cut them down in size and give them a more modest position in society."

The good news is that the internet is quickly making old-style peer review obsolete. Like it or not, the entire world is our peer. If you have something good to offer, fight like hell to promote it and, if it's any good, the world will acknowledge your effort and compensate you accordingly. Just come out into the playground and show us what you've got.

Most peer-reviewed scientific papers are boring crap anyway. Your worth should not be measured by how many papers you've published but by what you have done that is useful.

more than 6 years ago

Submissions


Multithreading Joins Gates as Yesterday's Man?

MOBE2001 MOBE2001 writes  |  more than 6 years ago

MOBE2001 writes "Excerpted from The Register:

[...] Multiple processor architectures introduce a new class of programming problem. Writing software to get the best performance from multiple-processor systems is far from straightforward. Issues such as synchronization, load balancing, memory protection and task distribution place new demands on programmers and those building tools that are used by developers.

Chip builders have concentrated on the use of multithreading. Intel, for example, has invested heavily in multithreading technology with its Threading Building Blocks (TBB) library extensions to C++. But the validity of multithreading is under attack. Veteran programmer Knuth said in a recent interview that multithreading may not be up to the task and could fail. As such, he is "unhappy" with the current trend towards multicore architectures.

[...] Knuth — the author of the seminal programmers' manual The Art of Computer Programming and a Turing Award winner — has to be taken seriously on this. And he is not alone. Sun Microsystems' director of web technologies Tim Bray, one of the team that created XML, has also criticized multithreading. Bray said that, while he once favored the approach, he had now turned away from it. Elsewhere the criticism of multithreading is even more direct.
"

Link to Original Source
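
To make the synchronization issue mentioned in the excerpt concrete, here is a minimal C++ sketch of my own (the thread and iteration counts are arbitrary): two threads bump a shared counter, once without any coordination and once under a std::mutex. The unguarded counter routinely comes up short because increments from different threads overwrite each other, which is exactly the kind of hand-managed coordination the article says multicore hardware now demands of programmers.

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    int main() {
        const int kThreads = 4;
        const int kIncrements = 100000;

        long long racy = 0;      // unsynchronized: updates can be lost
        long long guarded = 0;   // protected by a mutex: always correct
        std::mutex m;

        std::vector<std::thread> workers;
        for (int t = 0; t < kThreads; ++t) {
            workers.emplace_back([&] {
                for (int i = 0; i < kIncrements; ++i) {
                    racy += 1;   // data race between threads
                    std::lock_guard<std::mutex> lock(m);
                    guarded += 1;
                }
            });
        }
        for (auto& w : workers) w.join();

        std::cout << "expected " << kThreads * kIncrements
                  << ", racy " << racy
                  << ", guarded " << guarded << "\n";
        return 0;
    }

Compile with a threading-aware flag (e.g., g++ -std=c++11 -pthread) and the racy total will usually fall short of the expected count, while the guarded one never does.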

Single Threading Considered Harmful

MOBE2001 MOBE2001 writes  |  more than 6 years ago

MOBE2001 (263700) writes "There has been a lot of talk lately about how the use of multiple concurrent threads is considered harmful by a growing number of experts. I think the problem is much deeper than that. What many fail to realize is that multithreading is the direct evolutionary outcome of single threading. Whether running singly or concurrently with other threads, a thread is still a thread. In my writings on the software crisis, I argue that the thread concept is the root cause of every ill that ails computing, from the chronic problems of unreliability and low productivity to the current parallel programming crisis. Obviously, if a single thread is bad, multiple concurrent threads will make things worse. Fortunately, there is a way to design and program computers that does not involve the use of threads at all. See Parallel Computing: Why the Future Is Non-Algorithmic for the full article."
Link to Original Source

Sorry, Still No Time Travel

MOBE2001 MOBE2001 writes  |  more than 6 years ago

MOBE2001 (263700) writes "The bad news is that time does not change. Spatial velocity is given as dx/dt. Velocity in time(dt/dt) is nonsensical. As simple as that. In other words, no time travel to the past or the future, no motion in space-time, no wormholes and no hanky-panky with your great, great, great grandmother. There is only the changing present, aka the NOW. This is the reason that Sir Karl Popper called spacetime, "Einstein's block universe in which nothing ever happens" (Conjectures and Refutations). The good news is that distance is an illusion and we'll be able to travel instantly from anywhere to anywhere."
Link to Original Source
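
For what it's worth, the arithmetic behind the submitter's claim is a one-liner. Written out in LaTeX:

    v = \frac{dx}{dt} \quad \text{(distance per unit time)}, \qquad \frac{dt}{dt} \equiv 1 \quad \text{(a dimensionless constant)}

So a "velocity through time" is not a rate of change at all; it is identically one, which seems to be the submitter's point that nothing analogous to spatial motion happens along the time axis.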

Why Parallel Programming Is So Hard

MOBE2001 MOBE2001 writes  |  more than 6 years ago

MOBE2001 writes "The human brain is a super parallel signal-processing machine and, as such, it is perfectly suited to the concurrent processing of huge numbers of parallel streams of sensory and proprioceptive signals. So why is it that we find parallel programming so hard? I will argue that the reason is not because the human brain finds it hard to think in parallel, but because what passes for parallel programming is not parallel programming in the first place. Switch to a true parallel programming environment and the problem will disappear. Read the rest of the article."
Link to Original Source

Panic in Multicore Land

MOBE2001 MOBE2001 writes  |  more than 6 years ago

MOBE2001 writes "There is widespread disagreement among experts on how best to design and program multicore processors. Some, like senior AMD fellow, Chuck Moore, believe that the industry should move to a new model based on a multiplicity of cores optimized for various tasks. Others (e.g., Anant Agarwal, CTO of Tilera Corporation) disagree on the ground that heterogeneous processors would be too hard to program. Some see multithreading as the best way for most applications to take advantage of parallel hardware. Others (Agarwal) consider threads to be evil. The only emerging consensus seems to be that multicore computing is facing a major crisis. [...] In a recent EETIMES article titled "Multicore puts screws to parallel-programming models", AMD's Chuck Moore is reported to have said that "the industry is in a little bit of a panic about how to program multicore processors, especially heterogeneous ones." Full article: Nightmare on Core Street."
Link to Original Source

MOBE2001 MOBE2001 writes  |  more than 8 years ago

MOBE2001 writes "Nobody likes to be told that they are wrong but we are. It all started over 150 years ago when Lady Ada Lovelace wrote the first algorithm (table of instructions) for Charles Babbage's analytical engine. We have been doing it wrong ever since. Why? Because we still haven't grasped the true hidden nature of computing. Sooner or later we will and, when that happens, there is no limit to how complex our future software systems will be. This will usher in the second computer revolution, one which will make the first one pale in comparison."

Journals


How to Design a Fine-Grain Self-Balancing Multicore CPU

MOBE2001 MOBE2001 writes  |  more than 7 years ago

The most important question a CPU designer must ask himself or herself is: what is the purpose of the CPU? Most people in the business will immediately answer that the purpose of the CPU is to execute sequences of coded instructions. Sorry, this is the wrong answer. This definition is precisely what got us into the mess we are in. Read full article.


COSA vs. Erlang

MOBE2001 MOBE2001 writes  |  more than 7 years ago

The functional programming language Erlang is rightfully touted by its supporters as being fault-tolerant. COSA shares all of the fault-tolerance qualities of Erlang, but that is where the similarities end. The COSA philosophy is that nothing should fail, period. There are software applications where safety is so critical that not even extreme reliability is good enough. In such cases, unless a program is guaranteed 100% reliable, it must be considered defective and should not be deployed. That's the main goal of project COSA: 100% reliability, guaranteed.


Jeff Han and the Future of Parallel Programming

MOBE2001 MOBE2001 writes  |  more than 7 years ago

Forget computer languages and keyboards. I have seen the future of computer programming and this is it. The computer industry is on the verge of a new revolution. The old algorithmic software model has reached the end of its usefulness and is about to be replaced, kicking and screaming if need be. Programming is going parallel, and Jeff Han's multi-touch screen interface technology is going to help make it happen. The more I think about it, the more I am convinced that Han's technology is the perfect interface for the COSA programming model. COSA is about plug-compatible objects connected to other plug-compatible objects. Just drag 'em and drop 'em. What better way is there to compose, manipulate and move objects around than Han's touch screen interface?


Intel's Whining

MOBE2001 MOBE2001 writes  |  more than 7 years ago

According to a recent CNET article, Intel fellow Shekhar Borkar is reported to have said that "software has to double the amount of parallelism that it can support every two years." This is so infuriating. That's not the problem with software. The nastiest problem in the computer industry is not speed but software unreliability. Unreliability imposes an upper limit on the complexity of our systems and keeps development costs high. We could all be riding in self-driving vehicles (and prevent over 40,000 fatal accidents every year in the US alone) right now, but concerns over safety, reliability and costs will not allow it. We have been using the same approach to software/hardware construction for close to 150 years, ever since Lady Ada Lovelace wrote the first algorithm for Babbage's analytical engine. The old ways of doing things don't work so well anymore.

The industry is ripe for a revolution. The market is screaming for it. And what the market wants, the market will get. It is time for a non-algorithmic, synchronous approach. That's what Project COSA is about. Intel would not be complaining about software not being up to par with their soon-to-be obsolete CPUs (ahahaha...) if they would only get off their asses, revolutionize the way we write software and provide revolutionary new CPUs for the new paradigm. Maybe AMD will get the message.


Why We Need a New Computer Revolution

MOBE2001 MOBE2001 writes  |  more than 7 years ago

Unreliability imposes an upper limit on the complexity of our software systems. We could conceivably be riding in self-driving vehicles right now but concerns over reliability, safety and high development costs will not allow it. As a result, over 40,000 people die every year in traffic accidents. Something must be done. Unfortunately, the computer industry is still using the same algorithmic computing model that Charles Babbage and Lady Ada Lovelace pioneered close to 150 years ago. This would not be so bad except that the algorithmic model is the main reason that software is so unreliable and so hard to develop. It is time to question the wisdom of the gods of computer science and switch to a new computing model, a non-algorithmic, synchronous model. It is time for a new revolution. There is no avoiding it. The market is screaming for it. And what the market wants, the market will get. This is what Project COSA is about.

Having seen firsthand the inertia and hostility of the western computer industry and computer science community toward any suggestion that there may be a better way of doing things, I have concluded that the new revolution cannot come from the West. They have placed their computer pioneers on a pedestal and nobody dares question the wisdom of their gods. India and China, on the other hand, don't have that problem. They have nothing to lose and everything to gain. They have been on the tail end of the first computer revolution from the beginning, but now they are in a position to leapfrog the western advantage and become the leaders of the second revolution.


Who Will Lead the Next Computer Revolution, East or West?

MOBE2001 MOBE2001 writes  |  more than 7 years ago

Will it be China or India? Or will it be Europe or the US? I am putting my money on either China or India, and here is why. The West has become severely handicapped by complacency and conceit. This is largely due to their having been at the forefront of the first computer revolution from the beginning. They are so immersed in and so drunk with the success of their own paradigm that they cannot imagine another paradigm replacing it. They have placed their famous scientists (Alan Turing, Fred Brooks, John von Neumann, etc...) on a pedestal. Nobody dares question the wisdom of the gods for fear of being ridiculed. As a result, nothing really new has emerged in more than half a century of computing. The approach to building computers is still based on the old von Neumann architecture, which is itself based on the algorithmic software model, a model that is at least 140 years old (Charles Babbage and Lady Ada Lovelace). Intel, IBM, AMD and the others are not doing research on truly new CPU architectures. Why should they? They're not in the business of inventing new computing paradigms. They are tool makers. They just produce processors that are optimized as much as possible for the current model. They have no choice but to continue to improve on the old von Neumann model by adding more speed, less energy consumption, more transistors, etc...

I think the West has forced itself into a dangerous situation. The reason is that, while this is going on, the computer industry is suffering terribly from a chronic malady called unreliability. Their own scientists (e.g., Fred Brooks) are convinced that the problem is here to stay. As bad as it already is, the real cost of unreliability goes deeper than it appears on the surface. Consider that over 40,000 people die every year in the US alone as a result of traffic accidents. The solution is obvious: people should not be driving automobiles. That is to say, all vehicles should be self-driving. However, building driverless vehicles is out of the question because concerns over reliability, safety and cost have imposed an upper limit on the complexity of our current software systems. On the military and political front, there is a desperate need to automate the battlefield as much as possible in order to minimize human casualties and appease the voters back home.

The western world is thus stuck between a rock and a hard place. On the one hand, they have a really nasty problem sitting on their lap and it keeps getting worse. On the other hand, they have a bunch of aging gurus with a firm grip on the accepted paradigm, telling them that the problem cannot be fixed. This is where the East may want to capitalize on and profit from the West's self-imposed mental paralysis, in my opinion. What if there were another paradigm that solved the reliability problem at the cost of beheading some of the demi-gods of western computer science? Should the East care? I don't think so. Is it their gods that would be sacrificed? No. Does not the West look down on them as being mere copycats? Yes. Are they not the technological maids hired by the West to cook and do their laundry (outsourcing), so to speak? Yes.

The point of all this is that countries like China and India may have been late jumping on the wagon, but there is no longer any reason or necessity for them to continue riding in somebody else's wagon. They can now afford their own. They don't have to do other people's laundry anymore. This is why I advise the movers and shakers of the East to take a good look at Project COSA. COSA is the solution to the nasty problem that everyone has been talking about. It's the one solution that the West cannot touch for fear of dirtying their "noble" hands and insulting their gods.

There is a revolution coming, no doubt about it. The market wants it, and what the market wants the market will get, by whatever means possible. Who will come out unscathed? Who will seize the opportunity and lead the revolution? The East or the West? Can the West wake up out of its drunken stupor, realize the error of its ways and repent in time? Seriously, I don't think so. I have seen firsthand the power and inertia of conservatism. The old guard will not be replaced without a fight. There is too much at stake... unless, of course, the revolution happens in the East. Then they would have to stand up and take notice.


The Real Cost of Software Unreliability

MOBE2001 MOBE2001 writes  |  more than 7 years ago

According to the National Center for Statistics and Analysis, in 2005, over 43,000 people were killed in traffic accidents in the U.S. alone. I don't know what the number is for the entire world but it must be in the six digits. No one can fault software unreliability for those fatalities since human drivers were at fault, but what if I told you that the reason that human beings are driving cars and trucks on the road and killing themselves in the process is that unreliability imposes an upper limit on the complexity of software systems? We could conceivably be riding in self-driving vehicles right now but concerns over safety and reliability will not allow it. In addition, the cost of developing safety-critical software rises exponentially with the level of complexity. The reason is that complex software is much harder to test and debug.

What will it take to convince the computer industry to change over to a new paradigm that will make it possible to automate all vehicles? What will it take to convince software developers that complexity no longer has to be an enemy but can and should be a trusted friend? What will it take to convince them that there is a way to build bug-free software of arbitrary complexity? What will it take? Are 43,000 dead men, women and children not enough?

In my opinion, most of the funds allocated for traffic research by the U.S. Department of Transportation should be used to find a solution to the software reliability crisis. Why? Because the solution would save lives. Are you listening, Secretary Peters?


What Computing Should Have Been

MOBE2001 MOBE2001 writes  |  more than 7 years ago

The Hidden Nature of Computing

The biggest problem with the universal Turing machine (UTM) is not so much that it cannot be adapted to certain real-world parallel applications but that it hides the true nature of computing. Most students of computer science will recognize that a computer program is, in reality, a behaving machine (BM). That is to say, a program is an automaton that detects changes in its environment and effects changes in it. As such, it belongs in the same class of machines as biological nervous systems and integrated circuits. A basic universal behaving machine (UBM) consists, on the one hand, of a couple of elementary behaving entities (a sensor and an effector) or actors and, on the other, of an environment (a variable).

More complex UBMs consist of arbitrarily large numbers of actors and environmental variables. This computing model, which I have dubbed the behavioral computing model (BCM), is a radical departure from the Turing computing model (TCM). Whereas a UTM is primarily a calculation tool for solving algorithmic problems, a UBM is simply an agent that reacts to one or more environmental stimuli. In order for a UBM to act on and react to its environment, its sensors and effectors must be able to communicate with each other.

The main point of this argument is that, even though communication is an essential part of the nature of computing, this is not readily apparent from examining a UTM. Indeed, there are no signaling entities, no signals and no signal pathways on a Turing tape or in computer memory. The reason is that, unlike hardware objects which are directly observable, software entities are virtual and must be logically inferred.
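
A rough sketch of this sensor/effector/variable triad in C++ (the class names and wiring below are my own invention for illustration; the text above does not define any concrete API):

    #include <functional>
    #include <iostream>

    // Hypothetical sketch of a minimal universal behaving machine (UBM):
    // one environmental variable, one sensor that detects changes in it,
    // and one effector that acts back on it.
    struct Environment { int value = 0; };

    struct Effector {
        Environment& env;
        void fire() { env.value += 1; }          // effect a change
    };

    struct Sensor {
        Environment& env;
        int last_seen;
        std::function<void()> signal;            // pathway to a receiver

        explicit Sensor(Environment& e) : env(e), last_seen(e.value) {}

        void sense() {                           // detect a change and signal it
            if (env.value != last_seen) {
                last_seen = env.value;
                if (signal) signal();
            }
        }
    };

    int main() {
        Environment env;
        Effector eff{env};
        Sensor sen(env);
        sen.signal = [&] { eff.fire(); };        // wire sensor to effector

        env.value = 42;                          // an external stimulus
        sen.sense();                             // the machine reacts to it
        std::cout << "environment is now " << env.value << "\n";  // prints 43
        return 0;
    }

Even in this toy form the signal pathway between sensor and effector is an explicit, first-class object, which is precisely what the paragraph above says cannot be found on a Turing tape or in raw computer memory.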

Fateful Choice

Unfortunately for the world, it did not occur to early computer scientists that a program is, at its core, a tightly integrated collection of communicating entities interacting with each other and with their environment. As a result, the computer industry had no choice but to embrace a method of software construction that sees the computer simply as a tool for the execution of instruction sequences. The problem with this approach is that it forces the programmer to explicitly identify and resolve a number of critical communication-related issues that, ideally, should have been implicitly and automatically handled at the system level. The TCM is now so ingrained in the collective mind of the software engineering community that most programmers do not even recognize these issues as having anything to do with either communication or behavior. This would not be such a bad thing except that a programmer cannot possibly be relied upon to resolve all the dependencies of a complex software application during a normal development cycle. Worse, given the inherently messy nature of algorithmic software, there is no guarantee that they can be completely resolved. This is true even if one had an unlimited amount of time to work on it. The end result is that software applications become less predictable and less stable as their complexity increases.

Excerpted from The Silver Bullet: Why Software Is Bad and What We Can Do to Fix It
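
One hedged way to picture the "critical communication-related issues" the excerpt says programmers must resolve by hand (this reading is mine, not something stated in the excerpt): in ordinary algorithmic code a data dependency exists only as the textual order of statements, so nothing in the language notices when a later change quietly invalidates it.

    #include <iostream>

    int main() {
        int width  = 3;
        int height = 4;
        // The dependency of area on width and height is recorded nowhere;
        // it exists only in the order of these statements.
        int area = width * height;

        height = 10;               // a later update that area never sees
        std::cout << "area = " << area << "\n";   // still 12, silently stale
        return 0;
    }

In a signal-based model, as I read the author's argument, the assignment to height would itself be an event that propagates to everything depending on it; here the programmer has to remember to recompute area.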


The Simple and Beautiful Secret of Truly Reliable Software

MOBE2001 MOBE2001 writes  |  more than 7 years ago

The secret of constructing reliable software is not rocket science. The secret is in the timing. Nothing must be allowed to happen before or after its time. If you could control the timing of events in a software system in such a way that the system's complex temporal behavior becomes thoroughly predictable, you could, as a result, construct a software sentinel that would automate the job of discovering and enforcing the temporal laws that govern the system's behavior. Additions and/or modifications to the system would not be allowed to break the existing timing protocols, thereby ensuring solid consistency. The beauty of this is that it makes it possible to create software programs of arbitrary complexity without incurring the usual penalties of unreliability and high development costs. This simple and beautiful secret will usher in the golden age of automation. This is the promise of Project COSA.
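
A heavily hedged sketch of what such a sentinel might look like (this design is my own guess at the idea; nothing in this entry specifies one): during a learning phase it records which events legitimately fire together in a cycle, and afterwards it flags any cycle whose event set breaks the recorded pattern.

    #include <iostream>
    #include <set>
    #include <string>

    // Hypothetical temporal sentinel: learns which events co-occur per cycle,
    // then reports any later cycle that violates the learned "temporal laws".
    class Sentinel {
        std::set<std::set<std::string>> laws_;
        bool learning_ = true;
    public:
        void end_learning() { learning_ = false; }

        // Returns true if this cycle's events are consistent with the laws.
        bool observe(const std::set<std::string>& cycle) {
            if (learning_) { laws_.insert(cycle); return true; }
            return laws_.count(cycle) > 0;
        }
    };

    int main() {
        Sentinel s;
        s.observe({"read_sensor", "update_display"});   // learned as legal
        s.end_learning();

        std::cout << std::boolalpha
                  << s.observe({"read_sensor", "update_display"}) << " "  // true
                  << s.observe({"update_display"}) << "\n";               // false
        return 0;
    }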


Why Quantum Computing Is Bunk

MOBE2001 MOBE2001 writes  |  more than 7 years ago

Paul Feyerabend, the foremost science critic of the last century, once wrote in his book 'Against Method' that "the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence. It is time to cut them down in size, and to give them a more modest position in society." Feyerabend was speaking of scientists in general but he may as well have been talking about the new "science" of quantum computing. Quantum computing is based on the so-called Copenhagen interpretation of quantum mechanics. The idea is that the states of certain quantum properties, such as the spin of a particle, are superposed, meaning that a quantum property can have multiple states simultaneously. The blatantly ridiculous nature of this belief has not stopped an entire research industry from sprouting everywhere in the academic community. Read the rest of the article.


MOBE2001 MOBE2001 writes  |  more than 9 years ago

Why Software Is Bad and What We Can Do to Fix It

There is something fundamentally wrong with the way we create software. Contrary to conventional wisdom, unreliability is not an essential characteristic of complex software programs. In this article (see link below), I propose a silver bullet solution to the software reliability and productivity crisis. The solution will require a radical change in the way we program our computers. I argue that the main reason that software is so unreliable and so hard to develop has to do with a custom that is as old as the computer: the practice of using the algorithm as the basis of software construction. I argue further that moving to a signal-based, synchronous software model will not only result in an improvement of several orders of magnitude in productivity, but also in programs that are guaranteed free of defects, regardless of their complexity.
