
Visualizing Algorithms

Soulskill posted about 2 months ago | from the grab-your-binoculars-and-go-code-watching dept.

Math

An anonymous reader writes "Many people reading this site probably have a functional understanding of how algorithms work. But whether you know algorithms down to highly mathematical abstractions or simply as a fuzzy series of steps that transform input into output, it can be helpful to visualize what's going on under the hood. That's what Mike Bostock has done in a new article. He walks through algorithms for sampling, shuffling, and maze generation, using beautiful and fascinating visualizations to show how each algorithm works and how it differs from other options.

He says, "I find watching algorithms endlessly fascinating, even mesmerizing. Particularly so when randomness is involved. ... Being able to see what your code is doing can boost productivity. Visualization does not supplant the need for tests, but tests are useful primarily for detecting failure and not explaining it. Visualization can also discover unexpected behavior in your implementation, even when the output looks correct. ... Even if you just want to learn for yourself, visualization can be a great way to gain deep understanding. Teaching is one of the most effective ways of learning, and implementing a visualization is like teaching yourself.""
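
For anyone who wants to poke at one of these ideas directly, here is a minimal sketch (not Bostock's code, just an illustration in Python): run a Fisher-Yates shuffle many times and tally where the first element lands. An unbiased shuffle spreads it evenly across every position, which is easy to eyeball even in a text histogram.

    import random
    from collections import Counter

    def fisher_yates(items):
        # Swap each position with a uniformly chosen position at or after it.
        a = list(items)
        for i in range(len(a) - 1):
            j = random.randrange(i, len(a))
            a[i], a[j] = a[j], a[i]
        return a

    # Crude text "visualization": where does element 0 end up over many runs?
    counts = Counter(fisher_yates(range(6)).index(0) for _ in range(60000))
    for pos in range(6):
        print(pos, "#" * (counts[pos] // 200))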


50 comments


Beautiful and fascinating (2)

yorgo (595005) | about 2 months ago | (#47332199)

Wonderful stuff. Reminded me of this site: http://www.informationisbeauti... [informatio...utiful.net] (beautiful ways to view typically boring stuff).

Al Gore Rhythms (0)

rossdee (243626) | about 2 months ago | (#47332205)

The cycles of global warming

His immortal quote (0)

Anonymous Coward | about 2 months ago | (#47332369)

From the standpoint of governance, what is at stake is our ability to use the rule of law as an instrument of human redemption. [nytimes.com]
--Al Gore, NYT, 2/28/2010, "We Can't Wish Away Climate Change"

'Human redemption', Al. Really? Is your theological head lodged that far into a lightless location?

/. in panic censorship mode (0)

Anonymous Coward | about 2 months ago | (#47332207)

if we change how we look at things the things we look at will change

Re:/. in panic censorship mode (1)

ebyrob (165903) | about 2 months ago | (#47332265)

What, you mean if I turn my head, the scenery changes? How unexpected! Who would ever think focus was important ... or choice?

change the scenario (0)

Anonymous Coward | about 2 months ago | (#47332309)

check things out as they really are, as opposed to the media hypnosis, maybe?

Re:change the scenario (1)

ebyrob (165903) | about 2 months ago | (#47332545)

It's ironic you post this on this particular article. It highlights how the placement of receptors in your eye affects how you see the world. What makes you think your own in-built bias (apparently culture-jamming) is any less than anyone else's?

I drive many other programmers batty because when they ask me for help, the first thing I do is "survey the scene" (the code surrounding their point of interest) rather than listen to anything they think or *know* about what is happening. Once I have my bearings and a basic outlook on the subject matter, I'll hear them out. It may seem like 5 minutes wasted, but I've solved many 4-hour problems in 10 minutes this way.

What makes you think I treat the media any differently?

Re:change the scenario (1)

retchdog (1319261) | about 2 months ago | (#47332871)

A group I was working with had this strange phenomenon where their windowed machine learning algorithm would just crap out on certain training sets.

A few weeks later, I was presenting my results and casually mentioned that, hey, the dataset I got is just outright missing 20% of the data, but there's still enough to illustrate the results, next slide. One of the leads asked, "Wait. Why didn't anyone notice this?"

I had assumed that it was just my dataset. Nope, turns out that the data just wasn't there at all (this was our partner's fault). It's not quite as trivial as it may seem, since it was point data and required some amount of interpolation. It was just that the interpolation script was intended to fill in, oh, a few hours or maybe days, but it was filling in a year or two.

Back on topic, I noticed it at first because I plotted the covariates (apparently I was the first to do this) and noticed that they were rock-still for huge sections. Sometimes the problem isn't with the code, or even the theory. Visualizing fixed algorithms may or may not be helpful, depending on audience and application, but even a crude visualization of the actual problem and data can almost never hurt. If nothing else, it gives you actual slides to show people. Not really knowing what's going into your algorithm is generally a bigger problem than choosing the "right" algorithm. Of course, sometimes the data really is too big/complex to plot, so you have to look at various algorithmic reductions, which can always make things interesting.
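
A rough sketch of the kind of sanity plot I mean, assuming a pandas/matplotlib setup (the file and column names here are invented for illustration):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical training table with a timestamp and a few covariates.
    df = pd.read_csv("training_data.csv", parse_dates=["timestamp"])
    covariates = ["temperature", "pressure", "flow_rate"]

    fig, axes = plt.subplots(len(covariates), 1, sharex=True)
    for ax, col in zip(axes, covariates):
        ax.plot(df["timestamp"], df[col], linewidth=0.5)
        ax.set_ylabel(col)

    # Long, rock-still stretches are the tell-tale sign of an interpolation
    # script papering over data that was never there.
    plt.show()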

Now to try to parlay my apparently rare insights into cash.

The book is always better than the movie (1)

ebyrob (165903) | about 2 months ago | (#47332247)

I'm sure this is fun, cool stuff to play with, but... I'm pretty sure my imagination is still better than what a computer screen can show me. "Playing computer" is one of the first practices most new programmers learn, and if you're good at it, it is one of the most powerful tools in your arsenal.

Here's hoping "kids these days" don't skip out on the importance of a programmer's imagination over these newfangled tools.

Re:The book is always better than the movie (1)

sribe (304414) | about 2 months ago | (#47332401)

Here's hoping "kids these days" don't skip out on the importance of a programmer's imagination over these newfangled tools.

Here's hoping they do. Us old farts need all the advantages we can get.

Re:The book is always better than the movie (2)

ebyrob (165903) | about 2 months ago | (#47332631)

Actually, after reading the article, I'd call what he's doing extremely good basic engineering and model/view design. It's very cool for the problems he presents, but it looks like a ton of work. To be generally useful, he'd have to come up with rules of thumb and generalizations for what is typically important to see/understand in a given algorithm, and a way to identify *what* to model/visualize that isn't completely subjective.

He appears to make some great subjective decisions about what to show and how; the problem is that many developers are weak in exactly those subjective areas. The real gem would be reducing the less useful subjectivity and turning it into something more objective (and better subjectively) through practice of useful rules and guidelines.

Consuming this stuff is really just letting another engineer do the heavy lifting FOR you. ... like watching a movie and letting Hollywood do the imagining FOR you...

I mean, this is a great movie, don't get me wrong, I'm just glad I read the book first.

Re:The book is always better than the movie (1)

werepants (1912634) | about 2 months ago | (#47334057)

I think the overall point he's making is that visualizing an algorithm's behavior can offer us better insight, faster, vs. just looking at our code and our error logs. I'm sure there are ubermensch programmers out there that never have their programs exhibit unexpected behavior, and always understand exactly why a test fails, but I'm not one of them.

I encountered this firsthand when I spent a couple of days trying to write a simple algorithm to detect clipping on an oscilloscope output. We had a secondary scope set up to trigger at the clipping level for the first, and so my target was to find as many clipping events as triggers on the second scope. My first several attempts failed, and I never got a direct match no matter what I was doing. Finally, I spent an hour or so coding up a visualization that would show the waveforms and pinpoint exactly where the program thought it detected clipping, and it became immediately clear to me that my initial assumption for my test was invalid - there were many times that the backup scope triggered without clipping on the first, and that clipping happened on the first without a corresponding trigger on the backup scope. It turned out that this was because of big transients showing up while the first scope was rearming, and vice versa - but without actually looking at the data and behavior of the program I would have kept wasting time thinking my algorithm was broken.

Of course, that's a unique situation, but I think the point still stands that our brains have a very powerful capacity to process visual information, and that sometimes an hour or two to slap together a visualization can pay for itself pretty quickly. Once you get familiar with the tools to make these kinds of visualizations, it can become very straightforward to develop one for your specific use case.

Re:The book is always better than the movie (1)

ebyrob (165903) | about 2 months ago | (#47349519)

> Once you get familiar with the tools to make these kinds of visualizations, it can become very straightforward to develop one for your specific use case.

That's the thing: we need a Visualization for Understanding 101 class in comp-sci or something similar.

I guess I had scientific modeling in physics as part of EE, but over half the focus was on how to gather and deal with the physicalities of real-world data, which isn't so important when you're modeling something that lives inside a computer to begin with.

Sure, various senior projects etc. require a presentation, but generally that's a "sink or swim" kind of thing rather than help and practice building tools for presenting esoteric information in an illustrative manner.

Note: I thought it was obvious reading digital output as analog (or merely hooking together input to output on two sensitive instruments) is always going to cause a lot of artifacts and distortion. You don't chain 2 microscopes together and expect to get twice the magnification with no problems...

Re:The book is always better than the movie (1)

werepants (1912634) | about 2 months ago | (#47353877)

Note: I thought it was obvious reading digital output as analog (or merely hooking together input to output on two sensitive instruments) is always going to cause a lot of artifacts and distortion. You don't chain 2 microscopes together and expect to get twice the magnification with no problems...

Maybe my description wasn't clear. These were two oscilloscopes reading from the same source in parallel. One scope was looking at high resolution, set to trigger on anything above noise, but with something like a 1V maximum amplitude (hence clipping at 1V). The second was set to trigger on anything over 1V and had a much larger view window, so there was no data lost to clipping.

The motivation here was to get detail data while simultaneously making sure to capture the full amplitude of big signals. Neither scope had an appreciable impact on the signal of the other. Once I figured out the problem, which was that there was not in fact a 1:1 correspondence between clipping on scope 1 and triggers on scope 2, a good-enough algorithm was stupidly simple - just check if the signal reached its maximum for >3 consecutive samples, and it performed about as well as a human would.
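
For what it's worth, the "good enough" version really was only a few lines; a sketch along these lines (NumPy assumed, thresholds made up):

    import numpy as np

    def find_clipping(samples, full_scale, min_run=3):
        # Flag stretches where the signal sits at its maximum for more than
        # min_run consecutive samples.
        at_max = np.isclose(samples, full_scale)
        events, run_start = [], None
        for i, flag in enumerate(at_max):
            if flag and run_start is None:
                run_start = i
            elif not flag and run_start is not None:
                if i - run_start > min_run:
                    events.append((run_start, i))
                run_start = None
        if run_start is not None and len(at_max) - run_start > min_run:
            events.append((run_start, len(at_max)))
        return events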

Visualize (2)

Bengie (1121981) | about 2 months ago | (#47332271)

I've always visualized what's going on; this is how I do everything. Doesn't everyone program this way? Think about something for a while, building up the model in your head, then visualize the interactions among the parts. If the problem is too complex to visualize, I simplify and add abstractions until I can visualize it. This iteration naturally creates simplified abstract layers in my code.

I debug most of my code this way also. When I'm internally visualizing stuff, I lose track of what's going on around me. I wouldn't be surprised if a brainscan would show activity in my visual cortex.

Re:Visualize (0)

Anonymous Coward | about 2 months ago | (#47332405)

Stupid people think they are average. Intelligent people besides the assholes at Mensa think so as well. Let me assure you that most programmers can't visualize anything beyond their next pay.

Re:Visualize (2, Interesting)

Anonymous Coward | about 2 months ago | (#47332895)

First the light is, and then I see the yarns and the strings that link them. Then the strings start vibrating and I can hear them talk to each other. Then I hear the races in the polyphony and I reach my arms out to the strings. I sip from the source and smell the algorithms. It is only then that the notes come to my fingers.

Re:Visualize (1)

sribe (304414) | about 2 months ago | (#47332417)

Doesn't everyone program this way?

No, actually ;-)

There seem to be at least a couple of distinct ways of "sensing" the code you're working on. (And I'm not even counting the poor schmucks who are never really able to understand what they're doing.)

Re:Visualize (0)

Anonymous Coward | about 2 months ago | (#47332627)

I always used to visualize everything; less so now. I seem to have some internal circuitry just for code now; I don't know how it works, but I can just understand code. I think it's related to symbolic reasoning or something; it feels similar to when I'm doing math, but it's mostly unconscious now. Only if I get stuck do I start drawing diagrams, and then I return to the old visual reasoning I used to use. I have about half a dozen diagrammatic formats I use: dataflow/dependency graph, timeline, Cartesian space, matrix, and probably others. My desk is covered with 3-month-old examples of each :p (Programmer for about 30 years, btw.)

Re:Visualize (0)

Anonymous Coward | about 2 months ago | (#47337553)

30+ years here also. Not programming much at work, but enough to keep me up at nights ;)

When I first started out, I visualized code structure (mostly OO-stuff) on paper, and that got me started on classes, methods and so on.
However, very quickly, I didn't need the visualization. Indeed, the code itself is enough.
Isn't it called a programming _language_ after all?
Yes, I think it's something like mathematics, or just reading and writing - maybe even drawing.
Done enough of that so that I often find the appropriate tradeoffs right off the bat now. There are no perfect designs, just something that solves the problems well enough to not need change for some foreseeable time.
However, people work and think very differently. Some may rely more on visual cues, or other forms of crutches.

At work now, I mostly program human beings. The thing is, people are very different, and they particularly enjoy creating more work and problems for themselves. If you stop or alert them, they will ignore you and continue right into the wall. So there also, you need to find a problem to solve, and just focus on what is your responsibility (i.e., the topics people are willing to listen to you about). Often a drawing with some text can explain a whole lot more than pages and pages of text.

When communicating, with a computer or a human being, simplicity is key. Less is more: it is easier to comprehend, it prevents misunderstandings, and it is faster and less error-prone to change. Above all, clarity can be achieved: a state where doubts and confusion are minimized. People can get into the zone, or just get it over with very fast, lessening time-consuming double work and procrastination.

The same way automation can allow you to be "lazy", clarity can as well, as _people_ will start working for/with you. They just need the right "hook". Here, experience and position / patience is key.

Visualization is one of many tools. But as the author also states, it can mislead. A deeper intuitive understanding from experience, along with some simple todo-lists and figures, do in most cases work wonders. Indeed, the less is more adage frees people to solve problems on their own, but towards the clear goals you've set up.

What most leaders miss is this: In order to lead people, you should be experienced, proficient and motivated, not just "good with people". If you don't know where to go, how to get there or what is realistic, how can you lead?

There's going to be a paradigm-shift in leadership more and more now, because the right people to lead are the people who have been "on the floor". Companies led by such people are the ones who can plan for success, not just talk about it.

It's the same with these visualizations: if you don't know what you're doing, you can get severely misguided. But the right intuitive understanding can provide the perfect visual example to properly solve whatever needs to be settled.

Re:Visualize (1)

ebyrob (165903) | about 2 months ago | (#47332695)

I can't speak for anyone else, but I'm definitely this way. When I get to work in the morning, I return to the massive subjective mechano-city in my head where the repository lives and plug it into the objective reality that is Mercurial. Much of bug-fixing is ferreting out the differences between the two. Reading code is removing the fog of the unknown in my head, and writing is creating new streets, buildings, and even neighborhoods. (Well... you could hardly call it something so terrestrial; more like cogs of motility and points of inflection, or something even more abstract, but you already have the idea.)

Re:Visualize (1)

Bengie (1121981) | about 2 months ago | (#47333037)

I'm not surprised, but it's interesting how the visualization you describe doesn't quite match how I "see" things. It would be a cool topic to categorize different types of visualizations and find correlations with who knows what.

Assuming I'm taking this the correct way, I find there to also be a "fog" of the sort when I work with stuff. I would describe my visualization as a "living" yet static picture. Different "parts" of the picture interact with other near-by parts, yet nothing actually "moves". When I look at the entire picture, I can see "blurry" parts, which I would analogize to your "fog". By looking at near-by parts/sections of the picture, I can narrow down what I am missing.

Some parts of a picture may look "funny" or "strange", and I usually find a bug or a potential issue in that part that I didn't notice before. I write a decent amount of multi-threaded programs, and this is how I typically find race conditions or high contention areas.

Since parts near each other are related or connected in some fashion, if I change how one part works, I can quickly see how the interactions of the other parts change. This can cause a domino effect in my mind about what other changes will need to happen to accommodate the one change.

This is probably about the best I can describe it, but I find it strange myself. It's like a world inside my head. Even when I was young, this is how I did things. Because this is how I think about almost everything, I had certain types of issues with classes when I was in school. I can visualize interactions well, but I have a hard time regurgitating raw facts. I did better in advanced topics that required understanding than in simple topics that mostly revolved around memorizing. For example, I failed Bio 101, but when I first talked to my bio teacher, before she realized who I was, she thought I was a sophomore or junior in the Bio major. She couldn't quite grasp how I could show great understanding of the subjects, yet fail the tests, which revolved around simple memorization.

Re:Visualize (1)

drkstr1 (2072368) | about 2 months ago | (#47337331)

Holy crap, you just described in perfect detail the way my brain works. I've never really been able to describe it, but you just did it perfectly... even right down to your past experiences in school. I particularly like how you describe it as a living, yet static picture. The best I've ever been able to describe it is: "I have a lot of RAM, but a slow CPU." Someone should make a Myers-Briggs-like classification specifically for engineers (predominately INTP/INTJ types?). There are clearly some common thinking patterns at work here, and it would be interesting to see how they affect the way people tackle engineering problems, and what type of problems their brains are best suited for.

Re:Visualize (1)

khellendros1984 (792761) | about 2 months ago | (#47337813)

I identify with your school experiences, although maybe not in as extreme a way; I could generally muddle through rote memorization to a certain degree, but my retention was terrible. The understanding was left behind when the specifics faded away...

Anyhow, for me, the "picture" exists, but it's more tactile than visual. There are visual aspects, but it's not how I process most of the information. Loops are spinning wheels when they don't have a clear exit condition, and feel like unrolled spirals when they're "for" loops going over specific ranges. Algorithms seem like they have a size/weight, which corresponds to my idea of how quickly they'll run on a given set of data (although it's not always accurate, yet).

If I don't remember how a section of code works internally, it feels hollow, and when I read the code, it's like looking inside the black-box. If I change something outside the box, I feel the domino effect, and when it hits the box, I need to look inside to see what'll fall over. I can also feel like threading some string through the eye of a needle, when I'm running some value up through a class hierarchy, or something.

I think that the important insight is that a lot of us become very skilled at constructing mental models of what we're working with, and gain some sort of sensory perception (often vision-related) of how the model functions. I think it's telling that (in my case) the world falls away from my perception when I'm working through a complex problem, and closing my eyes sometimes helps, as well.

Re:Visualize (1)

LongearedBat (1665481) | about 2 months ago | (#47333121)

Well, yes, but ...

when the animations end, the resulting mazes are difficult to distinguish from each other. The animations are useful for showing how the algorithm works, but fail to reveal the resulting tree structure.

Sure, I visualise how algorithms work; otherwise I can't code them. But the difference in maze generators struck me, because, as quoted, they're difficult to distinguish from each other.

Other things, such as sorting algorithms, are fairly obvious... but still nice to see again.
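
For context, here's a bare-bones sketch of one common generator, randomized depth-first search (the "recursive backtracker"); the carved passages form a spanning tree of the grid, which is exactly the structure the animations don't show:

    import random

    def dfs_maze(width, height):
        # Carve passages with a randomized depth-first search. The removed
        # walls form a spanning tree: width*height cells, one fewer passages.
        visited = [[False] * width for _ in range(height)]
        passages = []
        stack = [(0, 0)]
        visited[0][0] = True
        while stack:
            x, y = stack[-1]
            neighbours = [(x + dx, y + dy)
                          for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= x + dx < width and 0 <= y + dy < height
                          and not visited[y + dy][x + dx]]
            if neighbours:
                nx, ny = random.choice(neighbours)
                passages.append(((x, y), (nx, ny)))
                visited[ny][nx] = True
                stack.append((nx, ny))
            else:
                stack.pop()
        return passages

    print(len(dfs_maze(10, 10)))  # 99 passages for a 10x10 grid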

Re:Visualize (1)

swillden (191260) | about 2 months ago | (#47335777)

That's not what the article is about. It's about generating graphical visualizations of what your code is doing on a computer screen, not generating internal visualizations of what you think your code is doing in your head. In fact, I'd guess that most of the value of the former comes from noticing the ways in which it's different from the latter.

Sure... (1)

jones_supa (887896) | about 2 months ago | (#47332367)

Many people reading this site probably have a functional understanding of how algorithms work.

Humorist.

Whoever wrote the summary is a fucking idiot (-1)

Anonymous Coward | about 2 months ago | (#47332397)

Sounds like shitty "science" journalism. SoulSkill, timothy, and samzenipus can all go suck on a fetid cock after fucking themselves in the ass with it.

Pretty, but misleading. (0)

Anonymous Coward | about 2 months ago | (#47332399)

Algorithms should be executed by computers, not developers. Viewing how an algorithm works reinforces procedural thinking which is a dangerous habit for students. The whole point of teaching recursion with the Towers of Hanoi is to show that procedure is best done mechanically, while the programmer only has to think about the correctness of the algorithm. Once a student capitulates and really gets the point about building correctness, not procedure, then sure, show them the mechanical fall-out. But you would not believe how many professional programmers are painfully limited in what they do by their belief in what algorithms are.
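
The classic formulation, for reference (a minimal Python sketch): you argue its correctness by induction on n, and let the machine grind out the moves.

    def hanoi(n, source, target, spare, moves):
        # Correct by induction on n; the machine handles the procedure.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)
        moves.append((source, target))
        hanoi(n - 1, spare, target, source, moves)

    moves = []
    hanoi(3, "A", "C", "B", moves)
    print(len(moves))  # 2**3 - 1 = 7 moves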

Re:Pretty, but misleading. (2)

UnknownSoldier (67820) | about 2 months ago | (#47332633)

This is total nonsense.

Algorithms are first _designed_ BY humans. Algorithms can be _optimized_ FOR computers.

Visualization is a way to augment understanding, not replace it.

There are 4 primary ways of learning:

* visual,
* auditory,
* kinesthetic, and
* mental.

Students have various ways that work "best" FOR THEM. Saying visualization is dangerous shows your ignorance about the subject. If you had seen the excellent movie Temple Grandin, you would understand that not everyone thinks the same way. [imdb.com]

Re:Pretty, but misleading. (1)

ebyrob (165903) | about 2 months ago | (#47332883)

Hmm... Get much performance out of your systems that way? Procedural methods have a time and place, and have given us most of what we have now. Functional methods have given us... a pale, slow, anemic cousin of what procedural has done? (That can't fail in theory, much like the Titanic...)

Honestly, garbage collection did way more for the industry than any other sea change... Here was some scut work we could freely let the computer do completely for us. (At least when any kind of hard timekeeping doesn't matter.)

I don't see how NOT understanding how computers work is going to make them better. Seriously. (I mean, if you want to skip steps from the design cycle, why not simply use genetic algorithms to implement everything? Then you don't need design either and can go straight from requirements to testing, and don't need to worry about anything else.)

15 Sorting Algorithms in 6 Minutes (2, Interesting)

Anonymous Coward | about 2 months ago | (#47332477)

https://www.youtube.com/watch?v=kPRA0W1kECg

Re:15 Sorting Algorithms in 6 Minutes (1)

PPH (736903) | about 2 months ago | (#47332933)

Hungarian bubble sort [youtube.com]

A great book for learning D3.js (2)

nullchar (446050) | about 2 months ago | (#47332499)

I'm not affiliated with the author in any way, but I did buy the book (though you can get it for free).

This is an amazing resource for someone new to D3.js's [d3js.org] declarative JavaScript and helps you put it all together: https://leanpub.com/D3-Tips-an... [leanpub.com]

After using D3.js, I've come to the conclusion that Mike Bostock is awesome! And it doesn't stop there; people have extended it with projects like Crossfilter [github.io] and dc.js [github.io].

Tech that allows a JavaScript n00b like myself to build a simple race results visualization [nullchar.net].

Magic Eye (2)

MrP- (45616) | about 2 months ago | (#47332611)

The colorized maze ones remind me of the magic eye posters. I think I see a schooner!

This is fundamental (1)

azav (469988) | about 2 months ago | (#47332743)

in explaining how things really are.

Light is a continuous signal. It must be sampled at some interval to be measured and interpreted. It actually flows like a river. Amazing.

Re:This is fundamental (1)

SuricouRaven (1897204) | about 2 months ago | (#47332973)

Like many things in physics, it's really a discrete signal at a rate so high as to be continuous for all practical purposes. If you could get your sampling rate ridiculous enough, you'd start to detect individual photons.

Re:This is fundamental (0)

Anonymous Coward | about 2 months ago | (#47333295)

Or just turn the light down far enough. Modern CCDs can detect individual photons.

Re:This is fundamental (0)

Anonymous Coward | about 2 months ago | (#47333313)

As usual, your word salad is meaningless crap. We've detected single photons with vacuum tubes decades ago. No "ridiculous" sampling required.

Re:This is fundamental (1)

Bengie (1121981) | about 2 months ago | (#47334007)

I don't think he meant detecting the existence of a photon at a certain time, but actually being flooded with photons, like from a spotlight, and still being able to distinguish the individual photons.

Some place, MIT or somewhere, was playing with a 1-trillion-FPS camera, and by processing the individual photons, they could see around corners to a certain degree. They could use non-polished surfaces like a mirror.

Nice work, but done many times before. (0)

140Mandak262Jamuna (970587) | about 2 months ago | (#47332781)

This is interesting work, and very well presented, no doubt. But it shows why your PhD guru makes you spend seemingly unreasonable time doing literature surveys. At first glance, this work seems very close to the solution-adaptive meshing techniques used in computational physics.

Using a bunch of sample points to represent a function is fundamental to computational physics: stress analysis, Colorful (oops, Computational) Fluid Dynamics, Computational Electromagnetics, etc. Solution-adaptive meshing is a very popular technique in these algorithms. Make a crude mesh and compute a crude solution, then use the gradients in the function to determine where the "cells" are too large or the representation is too poor. From there we go to "p" refinement, where we jack up the order of mode shapes in the finite element; or "h" refinement, where we refine the mesh by adding points; or "r" refinement, where we move mesh points from less important regions to more important regions.

In the "h" refinement technique one would insert points based on cell-centroid, cell-circumcenter or longest-edge-bisection etc.

This work seems to be 2D; these techniques were first published back in the 1980s and extended to 3D in the 1990s.

The commercial CEM package made by Ansoft to solve 2D electromagnetics, called Maxwell, used Voronoi-polygon-based refinement of 2D meshes. It shipped in 1990. They were doing 3D Voronoi-polyhedron-based solution-adaptive refinement of sample points in the 1993 version of their 3D product, HFSS.

http://www.google.com/search?q... [google.com]

Re:Nice work, but done many times before. (1)

Anonymous Coward | about 2 months ago | (#47333411)

There's no claim to novelty for any of the algorithms. If you scroll down you'll also see imagery of quicksort: stop the presses, this was invented in 1960. The point is the graphical presentation of existing algorithms. This is all made very clear in the prose, and in the summary, and in the title. So well done Captain Blowhard, top marks for your knowledge of a somewhat related domain (although what you're talking about really has fuck all to do with Poisson-disc distributions) but minus several million for basic reading comprehension failure.

Re:Nice work, but done many times before. (0)

Anonymous Coward | about 2 months ago | (#47334727)

There has also been audible "visualization" of algorithms, whereby you can hear what's happening.

Re:Nice work, but done many times before. (0)

Anonymous Coward | about 2 months ago | (#47334747)

Oh, I see that he does mention some under "Related work"; you just gotta read closely enough.

Good and Fail at the same time (0)

Anonymous Coward | about 2 months ago | (#47332877)

I do agree this is very cool. However, a couple of years ago, someone tried to do a visualization of calculus (I don't have the link, but I'm sure I read about it on Slashdot) with relative success at explaining simple ideas like volumes and areas. But past simple ideas, it failed to "visualize" abstract ideas like substitution.

Reminded me of a great TRS-80 Sort comparison (0)

Anonymous Coward | about 2 months ago | (#47333203)

It was a simple visualization of different sorting algorithms by using the memory mapped screen as an array and filling it with random visible ascii characters. Then you could watch as each sort moved the values around on the screen. I was very new to programming but it gave me a real sense of how many different ways you could do one thing. I also gained a great understanding of how different sort algorithms worked.
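
In that spirit, a rough modern homage (a Python sketch; one terminal line stands in for the memory-mapped screen, redrawn after every swap so you can watch bubble sort crawl):

    import random
    import time

    def bubble_sort_visual(width=60, delay=0.02):
        # One line of the terminal plays the role of screen memory:
        # redraw it after every swap so you can watch the sort work.
        values = [random.randint(0, 9) for _ in range(width)]
        for i in range(len(values)):
            for j in range(len(values) - 1 - i):
                if values[j] > values[j + 1]:
                    values[j], values[j + 1] = values[j + 1], values[j]
                    print("\r" + "".join(str(v) for v in values), end="", flush=True)
                    time.sleep(delay)
        print()

    bubble_sort_visual()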

Visualization for evaluating randomness (2)

Lorens (597774) | about 2 months ago | (#47334123)

Visualization is also great for evaluating randomness; remember the images of broken RNG implementations a few years ago? http://lcamtuf.coredump.cx/new... [coredump.cx]
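
A minimal sketch of the trick (Python, assuming NumPy and matplotlib): render a bit stream as an image; a weak generator tends to show stripes or lattices where a good one looks like static.

    import os
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_bits(raw_bytes, width=256):
        # Unpack the bytes into bits and draw them as a black-and-white image.
        bits = np.unpackbits(np.frombuffer(raw_bytes, dtype=np.uint8))
        rows = len(bits) // width
        plt.imshow(bits[:rows * width].reshape(rows, width), cmap="gray")
        plt.axis("off")
        plt.show()

    # e.g. eyeball the OS generator; swap in a suspect RNG's output to compare.
    plot_bits(os.urandom(256 * 256 // 8))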

Not how I visualize algorithms (0)

Anonymous Coward | about 2 months ago | (#47334129)

You kiddies probably don't even know what a flow chart is, much less how to do a good one....

                    mark

Help Wanted (1)

lazy genes (741633) | about 2 months ago | (#47338153)

I have been working on an algorithm that predicts the outcome of sporting events based solely on weather variables. I have improved it each year, and now I am winning handicapping contests and making money every year. I have developed another algorithm that predicts temperature based solely on lunar cycles. I know they can be improved, but I lack the knowledge and skills to do it. If anyone is interested, I would be happy to share what I've discovered.