Slashdot: News for Nerds

Astonishing Speedup In Solving Linear SDD Systems

kdawson posted more than 3 years ago | from the want-to-see-it-again? dept.

Education

eldavojohn writes "A new paper (PDF) out of Carnegie Mellon University shows how to solve symmetric diagonally dominant linear systems much faster than before. The technique employs graph theory, randomized algorithms, and linear algebra to achieve an astonishing advantage over current methods to solve such systems. From the article: 'The result is a significant decrease in computer run times. The Gaussian elimination algorithm runs in time proportional to s^3, where s is the size of the SDD system as measured by the number of terms in the system, even when s is not much bigger than the number of variables. The new algorithm, by comparison, has a run time of s*[log(s)]^2. That means, if s = 1 million, that the new algorithm run time would be about a billion times faster than Gaussian elimination.' Developers out there who maintain matrix packages and linear algebra tools might want to take a peak at the paper. Anyone who has modeled real-world systems will be able to tell you that speedups in linear algebra algorithms have a weighty effect on efficiency in simulations — especially as one approaches the theoretical limits of optimality. This research is currently being presented at the IEEE Symposium on Foundations of Computer Science."

157 comments

May I be the first to say (5, Funny)

Anonymous Coward | more than 3 years ago | (#33984876)

Huh?

Re:May I be the first to say (1)

ghostweed (762809) | more than 3 years ago | (#33984974)

take a peek = have a look
take a peak = summit Everest
fit of pique = be irritated

Oh, and sherbet is the dessert. Sure, Bert is Sesame Street. Remember that.

Re:May I be the first to say (0)

Anonymous Coward | more than 3 years ago | (#33985002)

pique can also mean "excite", as in "to pique one's interest".

Re:May I be the first to say (1)

wastedlife (1319259) | more than 3 years ago | (#33985382)

Huh??

Re:May I be the first to say (1)

wastedlife (1319259) | more than 3 years ago | (#33985478)

Developers out there who maintain matrix packages and linear algebra tools might want to take a peak at the paper.

Never mind, I see now what ghostweed was talking about. It would help if he had quoted that and made a new thread instead of replying to a random thread with seemingly random babble that makes Spamland [youtube.com] seem coherent.

Re:May I be the first to say (2, Funny)

Gilmoure (18428) | more than 3 years ago | (#33985836)

Tonight's the night I shall be talking about of flu the subject of word association football. This is a technique out a living much used in the practice makes perfect of psychoanalysister and brother and one that has occupied piper the majority rule of my attention squad by the right number one two three four the last five years to the memory. It is quite remarkable baker charlie how much the miller's son this so-called while you were out word association immigrants' problems influences the manner from heaven in which we sleekit cowering timrous beasties all-American Speke, the famous explorer. And the really well that is surprising partner in crime is that a lot and his wife of the lions' feeding time we may be c d e effectively quite unaware of the fact or fiction section of the Watford Public Library that we are even doing it is a far, far better thing that I do now then, now then, what's going onward christian Barnard the famous hearty part of the lettuce now praise famous mental homes for loonies like me. So on the button, my contention causing all the headaches, is that unless we take into account of Monte Cristo in our thinking George the Fifth this phenomenon the other hand we shall not be able satisFact or Fiction section of the Watford Public Library againily to understand to attention when I'm talking to you and stop laughing, about human nature, man's psychological make-up some story the wife'll believe and hence the very meaning of life itselfish bastard, I'll kick him in the Ball's Pond Road.

Re:May I be the first to say (0)

Anonymous Coward | more than 3 years ago | (#33986554)

Sesame Street's kiiinda neat, just hardly replete with the human elite, and where are the gold teeth and my dead buddy Keith and the elephants' feet and Bob Marley?

Re:May I be the first to say (1)

Nadaka (224565) | more than 3 years ago | (#33985492)

It's a fundamental paradigm shift in the basic efficiency of solving linear equations: a reduction from cubic time to nearly linear.

Re:May I be the first to say (4, Informative)

mdmkolbe (944892) | more than 3 years ago | (#33986168)

It lets us compute more precise simulations faster. This helps everything from better weather predictions, to faster simulations for industrial design, to game physics engines.

Basically if you can solve X=A*B or Ax=b more quickly then you can run simulations faster. The paper shows a way to solve Ax=b faster provided A is diagonally dominant (it usually is). Here A, B and b are given, X and x are unknowns. Upper case is matrix. Lower case is vector.
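
To make that concrete, here's a minimal numpy sketch (illustrative only, not the paper's algorithm): build a small symmetric diagonally dominant A, then solve Ax = b with a standard dense solver.

    import numpy as np

    # Build a small symmetric matrix, then pad the diagonal so each diagonal
    # entry exceeds the sum of the absolute off-diagonal entries in its row.
    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                          # symmetrize
    A += np.diag(np.abs(A).sum(axis=1) + 1.0)  # force diagonal dominance

    b = rng.standard_normal(n)

    # Standard dense solve (LU under the hood): the O(n^3) baseline.
    x = np.linalg.solve(A, b)
    print(np.allclose(A @ x, b))               # True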

In theory simulations that took days (~10^15 cycles) will now take hundreds of microseconds (~10^5 cycles). In practice, we already have techniques that usually work on "most" inputs to these types of simulations. This paper presents a technique that works on "more" inputs to those systems. (But note that their runtime, n*[log(n)]^2, is expected time and not worst case time.) The potential impact is huge but the actual impact is yet to be seen.

Disclaimer: I've published in this area before so I have some background understanding, but it's been a while and I've read only the abstract of the paper. Once I read the full paper I'll have a better idea of whether it lives up to the hype. I'm hopeful because this could be a very important result. I'm also cautious because they use an approximation algorithm (that is what the "epsilon" is about) and also use expected time rather than worst case time. These factors may limit the usefulness of the algorithm.

Re:May I be the first to say (-1, Troll)

Anonymous Coward | more than 3 years ago | (#33987164)

Better weather predictions? Better than what? Predicting the weather is an obscene waste of time... we will never be able to collect all of the variables needed to predict the weather accurately.

Yarr (-1, Troll)

Anonymous Coward | more than 3 years ago | (#33984882)

Yarr

Wow? (5, Funny)

Jargon Scott (258797) | more than 3 years ago | (#33984896)

Good gravy, now I understand why my non-tech wife just stares blankly at me when I'm describing my plain old IT job.

Re:Wow? (1, Funny)

Anonymous Coward | more than 3 years ago | (#33984916)

She could also be autistic or wishing she were blind AND deaf.

Re:Wow? (1)

zhong-guo (1872764) | more than 3 years ago | (#33985066)

Talking with wife about work is beating the sheep for music. Old Chinese yan yu (proverb).

Re:Wow? (1)

box4831 (1126771) | more than 3 years ago | (#33985180)

by Jargon Scott (258797)

I am interested in purchasing some legless dogs from you

Re:Wow? (1)

Jargon Scott (258797) | more than 3 years ago | (#33985472)

They're like cuddly throw pillows.

A Perfect Slashdot Article (5, Insightful)

GrifterCC (673360) | more than 3 years ago | (#33984898)

I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

I'm serious. This is exactly what I want from Slashdot.

Re:A Perfect Slashdot Article (0)

Anonymous Coward | more than 3 years ago | (#33984914)

Also: If I'm reading this correctly then This Matters!

I'm looking forward to reading the paper.

Re:A Perfect Slashdot Article (2, Insightful)

jgagnon (1663075) | more than 3 years ago | (#33984940)

I was thinking something similar. I'm a fairly smart guy and this went way over my head. I love math and the problems it can solve when used creatively. I've just never had the energy to pursue it to a level these (and other) researchers have.

I feel myself chanting "I'm not worthy."

Re:A Perfect Slashdot Article (5, Insightful)

ceoyoyo (59147) | more than 3 years ago | (#33985064)

Mathematicians like it that way.

One of the math professors at Berkeley tells his students that math is messy - when you're working on a proof you start from one place, wander around for a while, then get to your destination.

Then you clean everything up and present it to the world as obvious.

Re:A Perfect Slashdot Article (1)

wwfarch (1451799) | more than 3 years ago | (#33986078)

So very true. I was a CS and Math major. Math could be frustrating at times because I'd spend all night and about 10 sheets of paper coming up with a proof. Once I condensed it all down it was about half a page. It's all about finding the correct approach.

Re:A Perfect Slashdot Article (1)

ChrisMaple (607946) | more than 3 years ago | (#33986378)

According to E. T. Bell, Gauss was particularly prone to condensing the proof so much that it was hard to understand. His motto was "few, but polished", amazing for so prolific a man.

Re:A Perfect Slashdot Article (1)

Surt (22457) | more than 3 years ago | (#33985510)

Interestingly enough, if you took college-level math, this was probably only one semester over your head. And it's not actually a particularly complicated topic; if you took three semesters of calculus in total (including high-school calculus), what's being described here is much easier to understand than that.

Re:A Perfect Slashdot Article (1)

clong83 (1468431) | more than 3 years ago | (#33986576)

I think it's just a matter of what you specialize in. To me, this story is really interesting, and the summary is perfectly understandable. But other times, I just scratch my head at stories that other people get excited about. I deal with solving sparse matrix systems every day, but I don't know the first thing about, say, the latest Mandriva release, or flash drivers. Or why I should care. Anyway, I'm off to read the paper now...

Re:A Perfect Slashdot Article (1)

$RANDOMLUSER (804576) | more than 3 years ago | (#33984954)

Yup. Then we see:

Developers out there who maintain matrix packages and linear algebra tools might want to take a peak at the paper.

Which is exactly what we expect from Slashdot.

Re:A Perfect Slashdot Article (1)

splutty (43475) | more than 3 years ago | (#33984956)

I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

Okay. Now you've confused me. Do you mean it drops causal references to advanced mathematics, in which case this article would have a direct impact on the advance of advanced mathematics itself, or did you mean to say it drops casual references to advanced mathematics, in which case it is assumed that the advance of advanced mathematics is something most people should understand? What?

Uhm. I think I've confused myself even more now.

Waaaaah!

This is exactly what I want from Slashdot, too (2, Insightful)

G3ckoG33k (647276) | more than 3 years ago | (#33985026)

I mean the article "In Florida, a Cell Phone Network With No Need For a Spectrum License"...

What is that?! Not news for nerds... Possibly news for techies.

Re:A Perfect Slashdot Article (4, Insightful)

Peach Rings (1782482) | more than 3 years ago | (#33985038)

Why do you need to go to college? Here, watch Linear Algebra [mit.edu] and Algorithms [mit.edu] .

Also there is no remotely advanced math dropped in TFA, though the actual paper of course is cutting edge research.

Re:A Perfect Slashdot Article (3, Funny)

jdgeorge (18767) | more than 3 years ago | (#33985118)

Or Khan Academy [khanacademy.org] . Out of curiosity, I watched all of Khan's math topics that weren't over my head. Next, I'm on to Addition 2!

Re:A Perfect Slashdot Article (0)

Anonymous Coward | more than 3 years ago | (#33985380)

KHAAAANNNN!!!! [youtube.com]

Gilbert Strang is awesome. (2, Interesting)

nten (709128) | more than 3 years ago | (#33985130)

If I had had Gilbert Strang as an instructor for linear algebra instead of the one I did have, maybe I would understand what the article is talking about. Having watched those videos I repeatedly said to myself, "Oh, that's what we were doing!" Are covariance matrices SDD? If so, could this be used to speed up principal component analysis?

Re:Gilbert Strang is awesome. (0)

Anonymous Coward | more than 3 years ago | (#33985446)

Well, that really depends on your definition of

the theoretical limits of optimality

My definition for that is "wtf, huh?".

covariance matrices are generally not SDD (1)

S3D (745318) | more than 3 years ago | (#33985892)

SDD - symmetric diagonally dominant - means each diagonal element exceeds the sum of the absolute values of all the other entries in its row. That is a very strong condition, which happens only in very specific applications. I've never met them in computer vision or image processing, for example.

Re:covariance matrices are generally not SDD (1)

clong83 (1468431) | more than 3 years ago | (#33986680)

Perhaps not, but I bet the condition number of any such matrix is also very high. Although perhaps rare in your case, they would still be among the toughest nuts to crack when they do show up. So this is still good news, if their method works as advertised.

Re:covariance matrices are generally not SDD (1)

Daniel Dvorkin (106857) | more than 3 years ago | (#33986884)

I've never met them in computer vision or image processing, for example.

On the other hand, I meet them all the time in statistical programming, which makes me think that as image processing algorithms get more probabilistically based, speedups like the one discussed here may be very useful to people working in your field. I know they will be in mine, and I'm looking forward to seeing practical implementations.

Re:Gilbert Strang is awesome. (1)

Daniel Dvorkin (106857) | more than 3 years ago | (#33987014)

Are covariance matrices SDDs?

I think SDD is sufficient for positive definiteness, and therefore for a matrix to be a valid covariance matrix, but not necessary. Anyone got an answer on this?
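
A quick numerical poke at the question (a numpy sketch, not a proof): a symmetric strictly diagonally dominant matrix with positive diagonal comes out positive definite, while a perfectly valid covariance matrix can fail the dominance test.

    import numpy as np

    # Symmetric, strictly diagonally dominant, positive diagonal.
    A = np.array([[4.0, 1.0, 1.0],
                  [1.0, 3.0, 1.0],
                  [1.0, 1.0, 5.0]])
    print(np.linalg.eigvalsh(A).min() > 0)   # True: positive definite

    # A valid covariance matrix that is NOT diagonally dominant:
    # strong correlations put more mass off the diagonal than on it.
    C = np.array([[1.0, 0.9, 0.9],
                  [0.9, 1.0, 0.9],
                  [0.9, 0.9, 1.0]])
    print(np.linalg.eigvalsh(C).min() > 0)   # True: still a covariance matrix
    print(all(C[i, i] > np.abs(C[i]).sum() - C[i, i] for i in range(3)))  # False

So it looks like sufficient (given a positive diagonal) but not necessary.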

Re:A Perfect Slashdot Article (0)

Anonymous Coward | more than 3 years ago | (#33985082)

I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

I don't think the submitter understood it either. "The technique employs ... linear algebra" - to solve linear systems. Well, duh! You can hardly solve a problem if you refuse to use the tools which allow you to *state* the problem.

Re:A Perfect Slashdot Article (0)

Anonymous Coward | more than 3 years ago | (#33987196)

I don't think the submitter understood it either. ... Well, duh! You can hardly solve a problem if you refuse to use the tools which allow you to *state* the problem.

Most likely a Visual Basic programmer. ;)

Re:A Perfect Slashdot Article (0)

Anonymous Coward | more than 3 years ago | (#33985110)

I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

I'm serious. This is exactly what I want from Slashdot.

I recommend you start visiting arXiv [arxiv.org] then.

Re:A Perfect Slashdot Article (1)

plcurechax (247883) | more than 3 years ago | (#33987010)

I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics

I recommend you start visiting arXiv [arxiv.org] then.

Are you suggesting the OP, a self-described interested layperson, learn or even merely follow mathematics research by reading arXiv? If so, WTF!?

arXiv is a pre-print archive of original research articles, not exactly a welcoming place for a non-mathematician (or non-subject specialist; physics and computer science use it too). Even with an undergrad degree in mathematics, I find it a difficult (and/or useless) place to try to follow progress in the field without editorial assistance to filter the wheat from the chaff. And I've been reading original (first source) research papers since the mid-1990s in multiple research disciplines.

You might as well ask him to read Euclid's Elements [utexas.edu] in its original Greek. Heck, after the translation, it would be more accessible, as it is intended to be a textbook for learning.

I would rather suggest trying some of the mathematics journals that are intended to be more accessible, such as those from the MAA [maa.org] and AMS [ams.org] societies. Some are aimed at students of two-year and four-year "colleges" (aka polytechs / technical colleges and universities), while others are just interesting yet often accessible, such as the Journal of Recreational Mathematics [baywood.com] and Mathematics Magazine [maa.org], and online columns such as Keith Devlin's Devlin's Angle [maa.org].

In the more general sense, I would recommend popular math writers such as Ian Stewart [warwick.ac.uk] , Simon Singh [simonsingh.net] , Paul J. Nahin [unh.edu] , the recently deceased Martin Gardner [scientificamerican.com] (slashdot [slashdot.org] ), and many more authors that I cannot recall.

Unfortunately I can't think of any pop-math books or articles on linear algebra, in the vein of "e: The Story of a Number" (Maor), "An Imaginary Tale" (Nahin), "Flatland" (Abbott), "Flatterland" (Stewart), "A Mathematician's Apology" (Hardy), "Fermat's Last Theorem" / "Fermat's Enigma" (US) [simonsingh.net] (Singh), "Does God Play Dice?" (Stewart), "Chaos" (Gleick), and many others.

To wit, mathematics is, I believe, the only discipline where fourth-year undergrad students take third- or fourth-year courses with "introduction" or "elementary" in their course titles. But I digress. My point is that given mathematics' long history, and that it has fascinated people across cultures throughout history, the subject has accumulated such a vast body of knowledge that it is difficult to get a firm understanding of every field within mathematics. So feeling overwhelmed with all the facts and fields to learn is normal.

Re:A Perfect Slashdot Article (1)

Mitchell314 (1576581) | more than 3 years ago | (#33985132)

SDD means something like a matrix with diagonal entries much larger than the other entries. Matrices are used in computing when you have a bunch of variables, a bunch of (linear) functions of those variables, and you want to solve for them. Classic example: in 3d graphics, you have a point in space (three variables: x, y, and z coordinate) and you want to map it to the computer screen (two variables: x, y). There's a bunch of functions you have relating all of those variables, and you can use matrices to consolidate and represent those functions. Then you can solve them using matrix magic to find the right pixel on the screen to draw.

Slight problem though: matrices can contain a large number of elements. For example, a matrix you'd use in 3D graphics is 4x4, which is 16 elements, and those elements each require quite a few calculations. Solving a single matrix system takes a lot more work every time you increase the size a little, and matrices can get big real fast as you add more and more variables.
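
A toy version of that 3D-to-screen pipeline (a numpy sketch with a made-up perspective matrix; real graphics code adds viewport scaling and more):

    import numpy as np

    # Hypothetical perspective projection matrix with focal length f.
    f = 2.0
    P = np.array([
        [f, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 1, 0],   # copies z into w for the perspective divide
    ], dtype=float)

    point = np.array([1.0, 2.0, 4.0, 1.0])  # homogeneous 3D point (x, y, z, 1)
    clip = P @ point
    screen_xy = clip[:2] / clip[3]          # divide by w: normalized screen position
    print(screen_xy)                        # [0.5 1. ]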

Re:A Perfect Slashdot Article (1)

arth1 (260657) | more than 3 years ago | (#33986448)

It may also have an application in relational databases, where some searches (and joins) are in fact simple matrix equations.
This algorithm could possibly be implemented on the database server side to improve performance.

Re:A Perfect Slashdot Article (1)

blackcoot (124938) | more than 3 years ago | (#33985246)

Let me summarize: the same math that drives Google's search (spectral graph theory) has been shown to be really useful for solving a very common class of linear equations, if you're willing to roll the dice (randomized algorithms).

eldavojohn (1)

Any Web Loco (555458) | more than 3 years ago | (#33985350)

props to eldavojohn too - most of the stuff he posts is interesting, and the summaries are usually good too.

Re:A Perfect Slashdot Article (5, Informative)

ObsessiveMathsFreak (773371) | more than 3 years ago | (#33986132)

I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college.

You more than likely did study this in college. This involves linear algebra, specifically the inversion of matrices and/or solving linear systems: Ax=b, where A is an mxn matrix, and x and b are nx1 and mx1 vectors respectively. What we'd like to do is solve for x using x=A^-1 b, where A^-1 is an inverse matrix of some kind. But getting the inverse is a notoriously difficult problem.
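
(A small numpy aside, purely illustrative: in code you almost never form A^-1 explicitly; you ask a solver for x directly, which is both faster and more accurate.)

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)

    x1 = np.linalg.solve(A, b)         # factor-and-solve: the usual way
    x2 = np.linalg.inv(A) @ b          # explicit inverse: slower, less accurate

    print(np.linalg.norm(A @ x1 - b))  # residual is typically smaller here
    print(np.linalg.norm(A @ x2 - b))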

In fact, a large reason digital computers were invented at all was so that "large" matrices could be inverted. And by large, I mean 12x12 matrices (keep in mind this was the 1950s). Computer games, mp3 players, spreadsheets and the internet are all quite incidental byproducts of humanity's quest to invert ever larger matrices in less time. Though just about any halfway sophisticated piece of software will invert a matrix at some point.

The reason for this is as follows: when human beings want to solve any difficult problem, they approximate it with a linear model and solve the associated linear system. Hence the ability to solve linear systems quickly and accurately is of fundamental importance. This goes double in the modern world as we increasingly rely on the ability of computers to handle ever larger linear systems--that is, to invert ever larger matrices.

The biggest problem with matrices, say nxn matrices, is that the time taken to solve them by Gaussian elimination goes up as O(n^3). So while your desktop could probably invert a 1000x1000 matrix in a few seconds, it would take a year to invert a million x million matrix, and around a million years to invert a billion x billion matrix. (Not that it could store either in memory.) While Gaussian elimination is a pretty poor algorithm efficiency-wise, these issues are unavoidable for general matrices.

However, most matrices we actually want to invert in practice are "sparse" or "diagonally dominant". The first property means that most of the elements are zero, so in a billion x billion matrix, instead of storing 10^18 numbers, you'd only have to store a few billion. Diagonally dominant means that the largest entries are along the diagonal. Both of these mean you can take a lot of--often heuristic--shortcuts in your inversion algorithms to cut down on time.
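
As a rough illustration of the sparse point (a scipy sketch, not the paper's solver): a random sparse symmetric matrix padded into diagonal dominance stores a few million entries rather than n^2, and an iterative method like conjugate gradients solves it without ever forming a dense matrix.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n = 200_000
    # Random sparse symmetric pattern, then pad the diagonal to force dominance.
    R = sp.random(n, n, density=2e-5, format="csr", random_state=0)
    A = R + R.T
    A = A + sp.diags(abs(A).sum(axis=1).A1 + 1.0)  # strictly SDD, hence SPD

    print(A.nnz, "stored entries, versus", n * n, "for a dense matrix")

    b = np.ones(n)
    x, info = cg(A, b)                    # conjugate gradients: sparse-friendly
    print(info, np.abs(A @ x - b).max())  # info == 0 means it converged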

These researchers claim O(s*log(s)^2) complexity for such systems. I suspect there are already a lot of O(s^2*log(s)) systems or the like out there, but even so this is a nice improvement. I doubt it's as big a breakthrough as claimed--I suspect this is hyped--but if it is an improvement, you could compare it to something like the Fast Fourier Transform speedup. Again, I doubt it's that large an improvement.

I'll finish by mentioning that this all falls under the umbrella of linear algebra, which is an absolutely essential skill of higher mathematics and computational mathematics. Any geek who prides themselves on their mathematical ability but doesn't know their linear algebra (linear systems, the four fundamental subspaces, eigenvectors/values, rotation/transformation matrices, etc.) shouldn't be holding their skills in such high regard. If you don't know your LA, or have forgotten it, one of the best places to start is Gilbert Strang's online lecture series [mit.edu] provided by MIT. These lectures are indispensable to anyone working with linear systems. After that, those who need an in-depth analysis of matrix algorithms should turn to something like "Matrix Algorithms" by G. W. Stewart [amazon.com], which goes into more detail and shows the more canonical algorithms. A little knowledge of linear algebra goes a long way in mathematics.

...Or failing all that, you could just open up MATLAB... or Octave. I recommend the educational route first though.

Discrete Math &/or Linear Algebra cover it all (0)

Anonymous Coward | more than 3 years ago | (#33986588)

"I can tell it's truly News for Nerds because I can barely understand what it's saying and it drops causal references to advanced mathematics--the stuff I only wish I'd had the fortitude to study in college. I'm serious. This is exactly what I want from Slashdot." - by GrifterCC (673360) on Friday October 22, @10:12AM (#33984898)

Per my subject-line above: Linear Algebra (matrix math really, with all of its 1:1 mappings & "wave of the hand left to bottom" (iirc) processing order (You have to SEE this last one to get what I mean here as to how you match/map & proceed thru matrices for multiplication for instance)) &/or Discrete Math (Graph Theory is covered here, & it's only a PART of this course (hardest course I felt I ever took in collegiate academia in fact, bar-none)) both cover a LARGE chunk of what's involved above!

(Additionally? Well, you don't have to be a "math major" to have taken them in collegiate academia either - they're both typically "std. fare" for Comp. Sci. majors generally also!)

APK

Sounds like multigrid (4, Interesting)

azaris (699901) | more than 3 years ago | (#33984906)

Multigrid [wikipedia.org] is theoretically O(s), so I don't immediately see how this is such a huge leap. Of course the actual complexity also depends on the problem and the implementation. Maybe their method is applicable to a wider variety of problems.

Also, the "iterated sparsifying" sounds a lot like algebraic multigrid.

Re:Sounds like multigrid (4, Interesting)

HighFlyer (60002) | more than 3 years ago | (#33985000)

I had the same thought, but I guess multigrid is limited to specific matrices while the new algorithm seems to be more generic. Not sure though; my last contact with multigrid was 14 years ago (when algebraic multigrid was a hot topic in science).

Re:Sounds like multigrid (2, Informative)

EdZ (755139) | more than 3 years ago | (#33985034)

Maybe their method is applicable to a wider variety of problems.

From TFA, that's exactly what this is.

Re:Sounds like multigrid (1)

StripedCow (776465) | more than 3 years ago | (#33985084)

Same thoughts here. I cannot understand why this is a huge leap practically speaking.
However, the new method might be of theoretical interest, since it uses techniques which are totally different from multigrid.
The fact that it applies only to SDD matrices is a bit disappointing though (multigrid can be applied to a wider range of problems).

Re:Sounds like multigrid (2, Interesting)

TrekkieGod (627867) | more than 3 years ago | (#33985164)

Multigrid [wikipedia.org] is theoretically O(s), so I don't immediately see how this is such a huge leap. Of course the actual complexity also depends on the problem and the implementation. Maybe their method is applicable to a wider variety of problems.

Also, the "iterated sparsifying" sounds a lot like algebraic multigrid.

I think the wider variety of problems is exactly the major advantage. A symmetric diagonally dominant matrix is not necessarily sparse, but it does come up a lot. I know from experience that nodal-analysis based circuit simulators will end up like that most of the time, since each matrix cell represents a node connection, and when you have a circuit element between node i and j, it's obviously also between j and i (exceptions made when modeling things like transistors, because you need to handle the gate differently). Each diagonal value is the sum of the rest of its row plus whatever's connected to ground, so it's always dominant.
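
A tiny worked example of that structure (a hypothetical three-node resistor network, sketched in numpy; not from any real simulator): off-diagonals hold the negated conductances between nodes, and each diagonal sums everything touching that node, so dominance is built in.

    import numpy as np

    # Conductances (siemens) between nodes, and from each node to ground.
    g12, g13, g23 = 0.5, 0.2, 0.1
    g1g, g2g, g3g = 0.3, 0.0, 0.4

    # Nodal-analysis matrix: G[i][j] = -g_ij, G[i][i] = sum of conductances at i.
    G = np.array([
        [g12 + g13 + g1g, -g12,             -g13],
        [-g12,             g12 + g23 + g2g, -g23],
        [-g13,            -g23,              g13 + g23 + g3g],
    ])

    I = np.array([1.0, 0.0, 0.0])  # 1 A injected at node 1
    V = np.linalg.solve(G, I)      # node voltages
    print(V)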

That said, circuit simulators also tend to come up with sparse matrices most of the time, since you rarely have every node being connected to every other node...

Re:Sounds like multigrid (1)

blackcoot (124938) | more than 3 years ago | (#33985284)

how so?

Re:Sounds like multigrid (4, Funny)

DriedClexler (814907) | more than 3 years ago | (#33985410)

I know an O(s) algorithm for solving linear systems, but it only works on diagonal matrices ...
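
For the record, that algorithm in its entirety (a numpy sketch):

    import numpy as np

    def solve_diagonal(A, b):
        """O(n) 'solver' for Ax = b when A is diagonal: x_i = b_i / A_ii."""
        return b / np.diag(A)

    A = np.diag([2.0, 4.0, 5.0])
    b = np.array([2.0, 8.0, 15.0])
    print(solve_diagonal(A, b))  # [1. 2. 3.]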

Grain of salt (4, Informative)

l00sr (266426) | more than 3 years ago | (#33984928)

Though the result sounds genuinely interesting, I'm not sure the comparison to Gaussian elimination is fair. The summary gives the O(s^3) asymptotic run time of Gaussian elimination for dense matrices, but there are much better alternatives for a sparse matrix, which is what the paper applies to. For example, if your matrix has the sparsity pattern of a planar graph, it has been known for 30 years that you can do Gaussian elimination in O(n^1.5). This result, however, seems to have the potential to be more widely applicable while producing a better asymptotic bound. So, sounds like great work, but not quite as amazing as the hype in the summary.

Re:Grain of salt (1)

Peach Rings (1782482) | more than 3 years ago | (#33984994)

Also the summary seems to have a fundamental misunderstanding of asymptotic analysis :)

Re:Grain of salt (0)

Anonymous Coward | more than 3 years ago | (#33985058)

Where did you get the notion that the method presented only applies to sparse matrices?

Re:Grain of salt (2, Informative)

blackcoot (124938) | more than 3 years ago | (#33985202)

Not really.

The method applies to the _general_ (i.e. dense) case of diagonally dominant (i.e. |A_i,i| > |A_i,j| for all i,j) symmetric systems that show up all the time (Gramian matrices, for example), not just the sparse case.

Re:Grain of salt (1)

Froze (398171) | more than 3 years ago | (#33985690)

Math typo:
I think you meant for all j != i

Re:Grain of salt (0)

Anonymous Coward | more than 3 years ago | (#33986500)

Yes, but 's' is not the size of the problem (n, the number of variables or vertices), but the number of non-zero terms (edges). Because for a dense graph/matrix s=n^2, this method is really O(n^2 \log^2 n), which is better than Gaussian elimination or Cholesky decomposition (both being O(n^3)). But this algorithm is a) probabilistic, b) approximate. You need to repeat it many times to have good confidence and small error. In what way is it better than Gauss-Seidel iteration?

Re:Grain of salt (2, Interesting)

ObsessiveMathsFreak (773371) | more than 3 years ago | (#33986384)

For example, if your matrix has the sparsity pattern of a planar graph, it has been known for 30 years that you can do Gaussian elimination in O(n^1.5).

This algorithm is supposed to run in O(n log(n)^2) time, which is actually better than O(n^1.5). Not a lot better mind you, but if you make n very large and don't have a lot of overheads, you should see improvements.

Re:Grain of salt (1)

VicDiesel (1550463) | more than 3 years ago | (#33986758)

I'll have to read the paper to see precisely how widely applicable it is, but SDD systems are not quite as interesting as they claim. Nor is solving linear systems with them. Netflix and Google &c do not solve linear systems with their SDD matrices: they need the dominant eigenvector. To find that you run the power method, which is matrix-vector multiplication. Much simpler. CFD and such applications they mention do give rise to SDD systems, but only in the case of diffusion, not convection. Also, diagonal dominance comes from using 2nd order methods (finite element or finite difference) only. Higher degrees of approximation give SPD (symmetric positive definite) systems. Now, if they could crack that problem it would be truly interesting. V.

Re:Grain of salt (1)

unperson (223869) | more than 3 years ago | (#33987292)

My thoughts exactly...

Symmetric, (strictly) diagonally dominant matrices are great: non-singular, real spectrum, diagonalizable... In fact, purely from the fact that the eigenvalues can be bounded away from 0, many iterative methods will have provably fast O(n^2) convergence... beating the classic O(n^3) by a factor of n.

I'm not up to speed in the particular tricks used for the Symmetric, DD regime, but certainly one would only "naively" try solving this using Gaussian elimination, due to the special structure. One thing I thought was interesting was that the authors mention that the "previous" fast algorithm solves in:

O(n*log(n)^25).

Well, for n < 10^52 (HUUUUUUUUUUUGE!!!), n^2 is less than n*log(n)^25, so the complexity constant becomes really important!!! I can't imagine that the "previous" algorithm was useful (practically speaking!)

-unperson

WTF, Over? (4, Funny)

toygeek (473120) | more than 3 years ago | (#33984960)

Who knew that when I got up to check my dailies (incl. /.) that I'd end up with scrambled brains for breakfast.

Re:WTF, Over? (1)

gstoddart (321705) | more than 3 years ago | (#33985076)

Who knew that when I got up to check my dailies (incl. /.) that I'd end up with scrambled brains for breakfast.

Mmmmm ..... brains!!!

Re:WTF, Over? (1)

WalkingBear (555474) | more than 3 years ago | (#33985604)

Who knew that when I checked *my* dailies, including /., that I'd see one of the very rare articles that stretches my brain a bit.

Math.. it's what's for breakfast.

Take a "peak"? (5, Funny)

jcwren (166164) | more than 3 years ago | (#33984964)

Take a Mt. McKinley at the paper? Or a K2?

Spell checkers are not a replacement for actually reading what you've written.

Re:Take a "peak"? (1)

MightyYar (622222) | more than 3 years ago | (#33985072)

Maybe it was a pun?

Re:Take a "peak"? (0)

Anonymous Coward | more than 3 years ago | (#33987080)

Maybe the op meant "leak"? Or "leek"?

Re:Take a "peak"? (1)

Arrepiadd (688829) | more than 3 years ago | (#33985102)

Or maybe spell checkers are not a replacement for the editor's work, as not all submitters on Slashdot have English as their first language, and peek/peak aren't that hard to confuse (like then/than, farther/further, ...).

2,000 year-old problem for developers (3, Funny)

digitaldc (879047) | more than 3 years ago | (#33984968)

"Solving these linear systems can be time consuming on even the fastest computers and is an enduring computational problem that mathematicians have sweated for 2,000 years."

I know what you mean. I remember Pythagoras, Euclid and Aristotle writing about sweating over their laptops while trying to smooth out their proofs and algorithms.

Re:2,000 year-old problem for developers (1, Funny)

Anonymous Coward | more than 3 years ago | (#33985036)

Archimedes too.. he used to move the sand grains of his finite state machine with a stick to perform such calculations :)

Why use a direct solver? (3, Informative)

ponraul (1233704) | more than 3 years ago | (#33985068)

Comparing the solver's complexity to Gaussian elimination isn't useful. No one in their right mind uses direct solvers on large linear systems.

Anyway, if the system is symmetric and diagonally dominant, SOR is the standard choice in iterative solvers.
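
For reference, a bare-bones SOR iteration (a sketch of the classic method, not tuned; omega = 1 reduces it to Gauss-Seidel):

    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
        # Successive over-relaxation for Ax = b. Converges for SPD systems
        # (e.g. symmetric diagonally dominant with positive diagonal)
        # whenever 0 < omega < 2.
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
            if np.linalg.norm(A @ x - b) < tol:
                break
        return x

    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(sor(A, b))  # matches np.linalg.solve(A, b)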

Re:Why use a direct solver? (0)

Anonymous Coward | more than 3 years ago | (#33985626)

Well, if your system is symmetric, sparse and diagonally dominant, you have a couple of choices in the domain of iterative solvers.

But it is well known that you require:
  * a well conditioned system
  * or a pre-conditioning algorithm

These iterative solvers do not work equally well on all systems...

For systems that are just sparse but not symmetric diagonally dominant, there is still the option of using sparse variants of Gaussian elimination, which only solve sub-matrices the tedious way. These have existed for more than 30 years and are still widely used.

Therefore the abstract is big hype.

Re:Why use a direct solver? (1)

VicDiesel (1550463) | more than 3 years ago | (#33987064)

Let me give you a system from high-order finite elements and see your iterative method crash and burn. And SOR has not been the standard choice for the last 30 years. Some Krylov subspace method (conjugate gradients, or GMRES / BiCGstab for nonsymmetric systems) is what almost everyone uses.

Must be sparse (0)

Anonymous Coward | more than 3 years ago | (#33985080)

Only works for sparse matrices.
Yawn...

Re:Must be sparse (1)

interval1066 (668936) | more than 3 years ago | (#33985156)

Right, sparse or loosely-coupled systems; that means these concepts can model data that is made up mostly of zeros. If anyone gives a crap the application that comes to mind is bitmap compression & maybe some steganography applications. Of course if you lived your whole life and never heard of this stuff you could quite probably have lived a full and enriching life anyway.

Re:Must be sparse (1)

gnasher719 (869701) | more than 3 years ago | (#33985608)

Right, sparse or loosely-coupled systems; that means these concepts can model data that is made up mostly of zeros.

It has to be sparse, because O (s * (ln (s))^2) doesn't even allow you to look at each of the s^2 elements of the matrix when s is large enough.
There must be some dependency on _how_ sparse the matrix is, or something like "O (s * (ln (s))^2) if the number of elements off the diagonal is at most O (s * ln (s))".

Re:Must be sparse (0)

Anonymous Coward | more than 3 years ago | (#33985840)

This method is not just for sparse systems. s is the number of matrix non-zeros.

Re:Must be sparse (1)

goodmanj (234846) | more than 3 years ago | (#33985620)

Sparse symmetric systems are actually *very* common in the physical world. The numerical solution to any elliptic partial differential equation will generally boil down to the solution of a sparse symmetric matrix equation -- that includes the static Maxwell's equations for electricity and magnetism, pressure equations for fluid flow, and many others.
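
The canonical example (a finite-difference sketch of a 1-D Poisson problem with zero boundary values; illustrative only): discretizing immediately produces a sparse, symmetric, diagonally dominant tridiagonal system.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    # 1-D Poisson: -u''(x) = f(x) on (0, 1), with u(0) = u(1) = 0.
    n = 99
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)

    # Second-difference matrix: tridiagonal, symmetric, diagonally dominant.
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc") / h**2
    f = np.pi**2 * np.sin(np.pi * x)  # chosen so the exact solution is sin(pi*x)

    u = spsolve(A, f)
    print(np.abs(u - np.sin(np.pi * x)).max())  # O(h^2) discretization error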

Re:Must be sparse (1)

frank_adrian314159 (469671) | more than 3 years ago | (#33987354)

If anyone gives a crap the application that comes to mind is bitmap compression & maybe some steganography applications.

And FEA and electronic circuit simulation and molecular simulations (with and without solvation) and a whole lot o'analog crap that gets simulated every day. Very sparse matrix algebra. Although I still wonder how well the numerics hold up under stiffness, ill-conditioning, etc.

Not worthy of the front page. (4, Informative)

mcg1969 (237263) | more than 3 years ago | (#33985094)

I agree with a previous poster who said it is unfair to compare this algorithm to Gaussian elimination. Frankly, it seems to me that the poster has taken enough numerical analysis to know that GE is O(n^3), but is not familiar with the much wider body of research into efficient methods for solving structured linear systems.

Symmetric diagonally dominant linear systems are among the best conditioned systems you could possibly construct, which means that it is a rare case in which you would ever use GE. Instead, you would use an iterative method, such as conjugate gradients or a stationary method like the Chebyshev method discussed in the paper. While it does seem that they are proposing a novel way to construct an efficient iterative method, the truth is that once you've entered the realm of iterative methods it's not likely there is going to be a single approach that works best in every instance. Even the paper's authors agree that this is a "conceptually simple and possibly practical iterative solver." They rightly recognize that what works well on paper may not always prove useful in practice. So I think we should take the authors' own advice and wait for further research into this approach.

Re:Not worthy of the front page. (1)

Rockoon (1252108) | more than 3 years ago | (#33985844)

You are correct. There are quite a few algorithm fields where papers use 'standardized' problems to compare their results to other methods, but those problems contain only extremely simplified and extremely degenerate cases, rather than practical use cases.

In the case of things like Genetic Algorithms, the simplified problem is often the "One Max" problem and the hard problem is the degenerate "Trap" problem (where the next-best solutions are all maximally far away in state space.)

In theory, the research that tackles these standardized problems is interesting, but in applied usage the user doesn't really care about those standardized cases, because they don't even remotely resemble the problem they are solving.

So while reading the article, all I can think is "it's great for 'denoising, image blending and segmentation'", but I continue to wonder what practical use this will have that I could leverage for solving problems I might encounter. Will I one day be combing over their paper in order to implement it, or will I be skipping over their paper because someone else actually generalized it? (Like I skipped over Compact Genetic Algorithms w/Tournament Selection and went straight to Extended Compact Genetic Algorithms.)

Re:Not worthy of the front page. (0)

Anonymous Coward | more than 3 years ago | (#33986056)

The iterative CG-type method LSQR (http://www.stanford.edu/group/SOL/software/lsqr.html) is our workhorse for most systems. The main reason for this is that matrix A may be square or rectangular (over-determined or under-determined), and may have any rank. For square systems the method is not so effective, unless you can find a good preconditioner.

Fuckin' linear SDD systems... (0)

Anonymous Coward | more than 3 years ago | (#33985112)

...how do they work?

SSD (1)

allanw (842185) | more than 3 years ago | (#33985138)

Who else read the title and was excited thinking there was an astonishing speed-up in SSD systems?

Re:SSD (1)

blackfrancis75 (911664) | more than 3 years ago | (#33985232)

There will be for me - at least, that is, if OSX 10.7 Lion supports TRIM. ;)

Re:SSD (1)

RivenAleem (1590553) | more than 3 years ago | (#33985252)

Wait, it isn't? It said something about computer run times. Run time isn't a new name for fetch cycle?

Good news for the ~ mathematicians (0)

Anonymous Coward | more than 3 years ago | (#33985276)

... in the world who make a career out of discovering and trying to use this stuff.

On a lighter note:
The basic concepts behind linear algebra are quite simple and I don't understand why it is not taught sooner than to junior/senior level math/physics majors.
Having studied both the theory and application (albeit only in undergrad courses) of linear algebra in college, I have to wonder why these concepts are not taught to grade school students in place of normal algebra. In my observations as a student and tutor, it seems the most prominent source of anxiety for math students is dealing with the concept that the letter x or y or whateverhaveyou represents a number. Having done Gauss-Jordan elimination until my knuckles bled (unfortunately, quite literally. Gotta love college), I can imagine a grade-school aged child having far less trouble applying simple arithmetic row operations than most seem to have with basic algebra. Then again, I'm not a PhD educational psychologist; my mother is.

Re:Good news for the ~ mathmeticians (0)

Anonymous Coward | more than 3 years ago | (#33986038)

It is - we study it in high school over here in Europe.

Does this mean anything for crypto? (1)

jonwil (467024) | more than 3 years ago | (#33985280)

Is this particular kind of funky advanced math used for anything in crypto or information security and does this new speedup help crack that?

Re:Does this mean anything for crypto? (1)

godrik (1287354) | more than 3 years ago | (#33986904)

There are probably no implications for cryptography. Solving linear systems is a well-known polynomial problem. We can do it faster now for some well-structured matrices, but we already had quite fast algorithms for doing it. Cryptography usually does not rely on solving linear systems; they are way too easy to solve.

Smart grid applications (1)

PPH (736903) | more than 3 years ago | (#33985376)

This could be interesting for predictive control of the power grid based on a real-time model. Current (no pun intended) technology can't do a real-time simulation, or must make some crippling simplifying assumptions, or is based on proprietary algorithms (which may be junk, but we can't validate them). Problems arising from the variability of wind energy sources, or the effects of faults on system stability, could be handled much more easily.

Also, maybe someone can incorporate this into SPICE [wikipedia.org] so I can run simulations of circuits of more than moderate complexity without dying of old age waiting for them to complete.

Applications (2, Insightful)

goodmanj (234846) | more than 3 years ago | (#33985568)

Just to give people an idea of how useful this is, here are a couple examples of real-world problems that this may help with:

Aerodynamics simulations
Weather and ocean models
Electric and magnetic field models
Earthquake and geophysics models

It may also be useful for modeling wave processes (light, sound, etc) depending on how they're handled. As a general rule, any time you use a computer to solve problems involving electricity and magnetism or the pressures and stresses on solid or fluid materials, you're going to be encountering linear symmetric diagonally-dominant matrices.

In many applications, approximate solutions are faster and more useful than the exact solution described here, but it's still a big deal.

I know who needs this (0)

Anonymous Coward | more than 3 years ago | (#33985842)

Please send this to the IPCC... oh wait, this would probably make things MORE accurate...

The Classic Napoleon Dynamite Problem Solved! (1)

Baldrson (78598) | more than 3 years ago | (#33985962)

The world has long been awaiting the algorithm that can predict whether a Netflix viewer will love or hate Napoleon Dynamite.

Of course, if it can do that....

Help? (1)

CAIMLAS (41445) | more than 3 years ago | (#33986416)

I don't suppose one of you hardware/math geeks could explain to the hordes of lowly IT/storage/server guys what this actually means to us, pragmatically?

I can't even tell at which 'level' this speed-up would occur. Software support/implementation for SSDs? SSD controllers? What are the practical implications?

Re:Help? (1)

chefmonkey (140671) | more than 3 years ago | (#33987316)

SDD != SSD.

"SDD" stands for "symmetric diagonally dominant."

A billion times faster (0)

Anonymous Coward | more than 3 years ago | (#33987142)

Can someone please explain how s*[log(s)]^2 is a billion times faster than s^3 if s = 1 000 000?

Re:A billion times faster (1)

chefmonkey (140671) | more than 3 years ago | (#33987526)

Uh. Math?

1000000 ^ 3 = 1000000000000000000
1000000 * (log(1000000))^2 = 169000000
1000000000000000000 / 169000000 = 5917159763

So it's more like 5.9 billion -- but "billion" is the right order of magnitude.
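
Same arithmetic in Python (a sketch; the parent is using the natural log, so the exact ratio shifts with the log base, but the order of magnitude doesn't):

    import math

    s = 1_000_000
    speedup = s**3 / (s * math.log(s) ** 2)  # natural log: ln(10^6) ~ 13.8
    print(f"{speedup:.3g}")                  # ~5.24e+09, i.e. billions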

Really cool (2, Informative)

frank_adrian314159 (469671) | more than 3 years ago | (#33987264)

The nice thing about this is that things like electrical circuit simulations, FEA matrices, etc. tend to be SDD. This may speed things up a bit.
