
MapReduce Goes Commercial, Integrated With SQL

kdawson posted more than 6 years ago | from the patterns-in-the-data dept.

Databases 99

CurtMonash writes "MapReduce sits at the heart of Google's data processing — and Yahoo's, Facebook's and LinkedIn's as well. But it's been highly controversial, due to an apparent conflict with standard data warehousing common sense. Now two data warehouse DBMS vendors, Greenplum and Aster Data, have announced the integration of MapReduce into their SQL database managers. I think MapReduce could give a major boost to high-end analytics, specifically to applications in three areas: 1) Text tokenization, indexing, and search; 2) Creation of other kinds of data structures (e.g., graphs); and 3) Data mining and machine learning. (Data transformation may belong on that list as well.) All these areas could yield better results if there were better performance, and MapReduce offers the possibility of major processing speed-ups."


Um, first question: WTF is MapReduce? (5, Funny)

Anonymous Coward | more than 6 years ago | (#24756325)

and can I run Linux on it? Or it on Linux? Is it available for my iPhone?

Re:Um, first question: WTF is MapReduce? (4, Funny)

spun (1352) | more than 6 years ago | (#24756379)

MapReduce is the algorithm used to determine the optimum folding pattern used to reduce a standard road map back into its folded state. Duh.

Re:Um, first question: WTF is MapReduce? (2, Funny)

Anonymous Coward | more than 6 years ago | (#24757671)

Why can't they just look at the creases? Duuuuuuh.

Re:Um, first question: WTF is MapReduce? (4, Funny)

AmberBlackCat (829689) | more than 6 years ago | (#24757735)

I thought those were like Rubik's Cubes where you just rip them apart and put them back together right.

Re:Um, first question: WTF is MapReduce? (1)

spazdor (902907) | more than 6 years ago | (#24767161)

That problem has already been solved by a collaboration of millions of computers! Haven't you ever heard of Folding@Home?

"Ok, what about accordion-style from the leftmost edge, with a vertical fold at the beginning!?

Re:Um, first question: WTF is MapReduce? (1)

zevans (101778) | more than 6 years ago | (#24801823)

MapReduce is the algorithm used to determine the optimum folding pattern used to reduce a standard road map back into its folded state. Duh.

Coded for, we assume, on the Y chromosome only.

Re:Um, first question: WTF is MapReduce? (4, Informative)

AKAImBatman (238306) | more than 6 years ago | (#24756393)

Good question. I had to look it up [wikipedia.org] . (Would it have killed the submitter or editor to include a link?)

Basically, the software gets its name from the list processing functions "map" (to take every item in a list and transform it, thus producing a list of the same size) and "reduce" (to perform an operation on a list that produces a single value or smaller list). The actual software has nothing to do with "map" and "reduce" per se, but it does do tokenization and processing on massive amounts of data.

Presumably the Map/Reduce part comes from first normalizing the items being processed (a map operation) then reducing them down to a folded data structure (reduce), thus creating indexes of data suitable for fast searching.
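To make the two list functions concrete, here's a minimal sketch in Python, which has both built in (the word list is an invented example):

from functools import reduce

words = ["John", "likes", "Sue"]
lengths = list(map(len, words))              # map: transform every item, producing a same-size list
total = reduce(lambda a, b: a + b, lengths)  # reduce: collapse a list to a single value
print(lengths, total)                        # [4, 5, 3] 12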

Re:Um, first question: WTF is MapReduce? (1, Informative)

Anonymous Coward | more than 6 years ago | (#24757125)

If my memory's right, this Java API for doing grid computing uses this pattern and gives quite a good explanation of it (I think it was developed by Google):
http://www.gridgain.com/

Re:Um, first question: WTF is MapReduce? (5, Informative)

severoon (536737) | more than 6 years ago | (#24758825)

Map-Reduce is definitely a technique related to grid computing, but they are not one and the same.

The most popular (to my knowledge) open source Java library implementing MR is Hadoop [apache.org] .

Here's the algorithm in a nutshell (anyone who knows more than me, please correct, and I'll be forever grateful). I have a bunch of documents and I want to generate a list of word counts. So I begin with the first document and map each word in the document to the value 1. I return each mapping as I do it, and it is merge-sorted by key into a map. Let's say I start with a document of a single sentence: John likes Sue, but Sue doesn't like John. At the end of the map phase, I have compiled the following map, sorted by key:

  • but - 1
  • doesn't - 1
  • John - 1
  • John - 1
  • like - 1
  • likes - 1
  • Sue - 1
  • Sue - 1

Now begins the reduce phase. Since the map is sorted by key, all the reduce phase does is iterate through the keys and add up the associated values until a new key is encountered. The result is:

  • but - 1
  • doesn't - 1
  • John - 2
  • like - 1
  • likes - 1
  • Sue - 2

Simple. Stupid. What's the point? The point is that the way this algorithm divides up the work happens to be extremely convenient for parallel processing. So, the map phase of a single document can be split up and farmed out to different nodes in the grid for processing, which can be processed separately from the reduce phase. The merge-sort can even be done at a different processing node as mappings are returned. Redundancy can be achieved if the same document chunk is farmed out to several nodes for simultaneous processing, with the first result returned being used and the others simply ignored or canceled (maybe they're queued up at redundant nodes that were busy, so canceling means simply removing from the queue with very few cycles wasted). Similarly, because the resulting map is sorted by key, an extremely large map can easily be split and sent to several processing nodes in parallel. The original task of counting words across a set of documents can be decomposed to a ridiculous extent for parallelization.
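For the curious, here is a minimal single-machine sketch of the word-count flow described above, in Python (punctuation is dropped from the sample sentence to keep tokenizing trivial; the plumbing a real framework adds - partitioning, shuffling, fault tolerance - is exactly what's left out):

def map_phase(document):
    # Emit a (word, 1) pair for every word; a real framework would
    # merge-sort these pairs by key across many machines.
    return sorted((word, 1) for word in document.split())

def reduce_phase(sorted_pairs):
    # Pairs arrive grouped by key, so a single linear pass sums each word's count.
    counts = {}
    for word, one in sorted_pairs:
        counts[word] = counts.get(word, 0) + one
    return counts

pairs = map_phase("John likes Sue but Sue doesn't like John")
print(reduce_phase(pairs))
# {'John': 2, 'Sue': 2, 'but': 1, "doesn't": 1, 'like': 1, 'likes': 1}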

Of course, it doesn't make much sense to actually do this unless you have a very large number of documents. Or, let's say you have a lot of computing resources, but each resource on its own is very limited in terms of processing power. Or both.

This is very close to the problem a company like Google has to solve when indexing the web. The number of documents is huge (every web page), and they don't have any super computers—just a whole ton of cheap, old CPUs in racks.

At the end of the day, Map-Reduce is only useful for tasks that can be decomposed, though. If you have a problem with separate phases, where the input of each phase is determined by the output of the previous phase, then they must be executed serially and Map-Reduce can't help you. If you consider the word-counting example I posted above, it's easy to see that the result required depends upon state that is inherent in the initial conditions (the documents)—it doesn't matter how you divide up a document or if you jumble up the words, the count associated with each word doesn't change, so the result you're after doesn't depend on the context surrounding those words. On the other hand, if you're interested in counting the number of sentences in those documents, you might have a much more difficult problem. (You might think you could just chunk the documents up at the sentence level, but whether or not something is a sentence depends upon surrounding context—a machine can easily mistake an abbreviation like Mr. for the end of a sentence, especially if that Mr. is followed by a capital letter which could indicate the beginning of a new sentence...which it almost always is. Actually...if you're smart you can probably come up with a very compelling argument that this is a very bad example. Which means you're also smart enough to substitute in your own, better example. :-p )

So the ability to use Map-Reduce depends greatly on whether the thing you're after depends upon context within or surrounding the problem statement. Which means that if you can figure out a way to transform a given problem in a way that somehow incorporates that context, you've rewritten the problem in a way that can now be attacked by Map-Reduce. So, my suggestion would be:

  1. Choose a serial problem
  2. Rewrite it in the right kind of way
  3. ????
  4. PROFIT!!!!11!1eleventy!!omgwtfbbq!

Re:Um, first question: WTF is MapReduce? (4, Informative)

Anonymous Coward | more than 6 years ago | (#24759819)

This classic word count example by Google is exactly what Aster demonstrated in their webinar via a live demo of their In-database MapReduce software:

http://www.asterdata.com/product/webcast_mapreduce.html

Re:Um, first question: WTF is MapReduce? (1)

gslavik (1015381) | more than 6 years ago | (#24760817)

I don't think that there is sorting going on in MapReduce (from what I've read). Could be that I missed something ...

Re:Um, first question: WTF is MapReduce? (1)

severoon (536737) | more than 6 years ago | (#24770723)

I only have a passing familiarity with Map-Reduce, so I'm definitely not an authoritative source. It's possible that sorting isn't part of the algorithm itself, but rather one example of context around how it's often implemented. It makes sense, though—why not merge-sort the results as mappings are returned? If you do implement it this way, it becomes possible to deal with really large maps that need to be spread over multiple nodes.

Re:Um, first question: WTF is MapReduce? (1)

jonaskoelker (922170) | more than 6 years ago | (#24763347)

I'm not quite entirely sure what you mean by the verb "map", the noun "map", and in which sense you use it in each instance. Also, I'm unsure why you think sorting enters into it.

My understanding of MapReduce is that it's (surprise!) all about applying the higher-order functions map and then reduce. Here's what they do:

Map takes a function f and a list [x_1, ..., x_n], then returns [f(x_1), ..., f(x_n)]. That is, it applies f to all the elements of the list. [Variants take multi-argument functions and multiple lists.]

Reduce takes an associative operator ++ and a non-empty list [y_1, ..., y_n] and returns y_1 ++ y_2 ++ ... ++ y_n. [Variants take an initial value, and may accept empty lists then.]

Example: you want to know the sum of the squares of 1 through k, and have a list [1, ..., k]. You can evaluate Reduce(addition, map(squaring, [1..k])) to get exactly what you want.

So, what's the big fuss about Google's MapReduce? It's an implementation of map and reduce that works in parallel on many machines; note that if f has no side effects, you can compute f(x_i) independently from f(x_j). Also, if you know f(x_i) and f(x_{i+1}) then you can compute f(x_i) ++ f(x_{i+1}) without worrying what happens elsewhere in the reduce job.
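A toy illustration of that point in Python: because square() has no side effects, the map can be farmed out to worker processes, and the reduce folds the results afterwards (a single-machine stand-in for what MapReduce does across a cluster):

from functools import reduce
from multiprocessing import Pool

def square(x):
    return x * x  # no side effects, so each call can run on any worker

if __name__ == "__main__":
    k = 1000
    with Pool(4) as pool:
        squares = pool.map(square, range(1, k + 1))  # the parallel "map"
    total = reduce(lambda a, b: a + b, squares)      # the "reduce"
    print(total)  # sum of squares of 1..k, per the example above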

Also, Google probably uses it for something other than computing sums of lists of numbers. Especially the ones that have closed form expressions ;)

Re:Um, first question: WTF is MapReduce? (1)

severoon (536737) | more than 6 years ago | (#24770793)

map, v. - to perform a mapping

map, n. - a collection of mappings

I think you describe the nuts and bolts of the algorithm...but that's not really that helpful when it comes to understanding the usefulness.

The big fuss about map-reduce (not necessarily Google's) is that we've pretty much hit the speed limit in single-core processing power. 4GHz is about it...it's not going to get any faster for some time. Unfortunately, most programs are written to only run on a single core, so adding more cores is only going to get you so far. If you want truly distributed load at a low level of granularity, map-reduce can contribute to a compelling story.

Re:Um, first question: WTF is MapReduce? (5, Informative)

jbolden (176878) | more than 6 years ago | (#24757601)

Here is the connection between map and reduce.

In programming

map takes a function from A to B and a list of A's, and produces a list of B's.

reduce is an associative fold: it takes a list of B's and an initial value, and produces a single C.

For example, MAP a collection of social security numbers to ages, and then select (REDUCE to) the maximum age from the collection.

Now there are results called "fusions" which allow you to make computational reductions, for example:
foldr f a . map g = foldr (f . g) a

So in other words the data set is being treated like a large array using array manipulation commands.
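In Python terms (with invented sample data), the typed pipeline above, and the fused single-pass version, look like this:

from functools import reduce

age_by_ssn = {"078-05-1120": 62, "219-09-9999": 35, "457-55-5462": 47}
ssns = list(age_by_ssn)

ages = map(lambda s: age_by_ssn[s], ssns)  # map: a list of A's (SSNs) to a list of B's (ages)
oldest = reduce(max, ages, 0)              # reduce: fold the B's with an initial value into a single C
print(oldest)                              # 62

# The fusion law lets the two passes collapse into one:
oldest_fused = reduce(lambda acc, s: max(acc, age_by_ssn[s]), ssns, 0)
assert oldest_fused == oldest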

Re:Um, first question: WTF is MapReduce? (2, Informative)

Jack9 (11421) | more than 6 years ago | (#24757757)

Google's MapReduce framework has a native resource manager that's aware of what resources are available, aware of failures, and prepared to reschedule failed processes and decide where (and when?) to direct finished tasks. Basically it's a job queue for distributed processing using a private network. MapReduce is just one tool. You aren't going to get much out of it after you max out your local machine's processing until you start work on the rest of it. What's really scary is that MySQL announces that they finally discovered the ancient algorithm of multithreaded recursive aggregation: "Hey look, in some cases MySQL won't waste processing power!" //i'm a mysql fanboy, but this is really an embarrassing announcement

Re:Um, first question: WTF is MapReduce? (1)

fimbulvetr (598306) | more than 6 years ago | (#24758625)

Got a link for the mysql thing you mentioned?

Re:Um, first question: WTF is MapReduce? (4, Funny)

Jack9 (11421) | more than 6 years ago | (#24758815)

I'm a little dyslexic. I immediately see the wheelbarrow as a MySQL icon (which almost universally marks a MySQL article) and read _M_apReduce into SQL = MYSQL in the title. This is proof I'm a reactionary blowhard who often fails to comprehend the summary, much less read the article.

There is no link because my wrongometer is not working; it has melted through its resin casing.

Re:Um, first question: WTF is MapReduce? (1)

FilterMapReduce (1296509) | more than 6 years ago | (#24758455)

Basically, the software gets its name from the list processing functions "map" (to take every item in a list and transform it, thus producing a list of the same size) and "reduce" (to perform an operation on a list that produces a single value or smaller list).

As does my Slashdot user name. Great, now everyone is going to think I'm calling on people to "filter" this software somehow, which I'd never heard of before this story. And it's "highly controversial", that's helpful.

Re:Um, first question: WTF is MapReduce? (0)

Anonymous Coward | more than 6 years ago | (#24761421)

So dude how can we filter MapReduce and why do you want us to do it?

Re:Um, first question: WTF is MapReduce? (0)

Anonymous Coward | more than 6 years ago | (#24759585)

Pfffffffffffttttt
Map reduce, in my day it was called LISP....

GET OFF MY LAWN!!!

Re:Um, first question: WTF is MapReduce? (1)

Hurricane78 (562437) | more than 6 years ago | (#24803681)

In Haskell, there is the function "fold" (foldr or foldl) for this. What's so special about this?

Haskell has "map", "filter", "zip" "reverse" and whatnot...
(... why must I think of Missy Elliott songs now?)

Re:Um, first question: WTF is MapReduce? (0, Troll)

moderatorrater (1095745) | more than 6 years ago | (#24756413)

Map reduce: a framework for taking a problem and breaking it up into smaller pieces. As I understand it, Map is the program that decides which server the data gets sent to, Reduce is the program that actually processes it. For google, when you write a query, they send the query to several different servers. Those servers then search their subset of the internet for that term, rank them, and return them. The central server then combines those results and returns them to the user. In this case, the Map program would send the request to the servers and be smart enough to make sure that you don't get duplicate servers. The Reduce program is the one that does the searching and sends them back.

Re:Um, first question: WTF is MapReduce? (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#24756711)

What drunk moderated parent a Troll?

Re:Um, first question: WTF is MapReduce? (3, Insightful)

Anonymous Coward | more than 6 years ago | (#24757689)

Probably someone who read the post and knows how wrong he is. Like you traverse the web every time you want to look up a search term or how a map is really the same as load balancing...

Re:Um, first question: WTF is MapReduce? (1)

Paradise Pete (33184) | more than 6 years ago | (#24760085)

What drunk moderated parent a Troll?

I'm guessing he figured that the post is so thoroughly wrong that it must be deliberate.

Re:Um, first question: WTF is MapReduce? (4, Funny)

Anonymous Coward | more than 6 years ago | (#24756451)

and can I run Linux on it? Or it on Linux?

Have you ever considered that it might itself be a distro? A, like, super-leet distro that the big Valley firms have been hacking together for the past ten years, only giving access to employees that sign a super-nasty NDA? A distro that traces back to a Photoshop 1.0 plugin for resizing GIFs?

Re:Um, first question: WTF is MapReduce? (1)

jefu (53450) | more than 6 years ago | (#24757241)

MapReduce is just an idiom (pattern, if you will) for processing collections (arrays, lists, trees, database tables...) of data. There is often another piece, "filter", that cuts out bits you don't want, but that can easily be done in the reduce step, though sometimes it is done somewhere else.

For example, suppose you want to compute exp(x) using the usual Taylor series expansion and 20 terms. Start with the list [0, 1, 2, 3, ..., 19]. Then map the function f(i) = x^i / i! to each entry in the list. Then reduce the list by adding all the pieces. (This is, admittedly, a trivial example and one that would be better done in other ways -- skip the map step and just do a reduce that computes the polynomial using Horner's rule.)

Doing this in code can in general make things easier to read (but not always - sometimes the reduce step can get messy). But suppose you wanted to do something like that on ten million numbers. With a hundred processors, you could split the numbers up so each processor would have about the same count, then do the maps on each processor, reduce (on that same processor) all of the mapped values, then collect the values and reduce again. Less data movement, and often a much (sometimes much, much) less complicated program.
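As a sketch, the exp(x) example above comes out like this in Python (sequential, but the map/reduce shape is the point):

from functools import reduce
from math import factorial

x = 1.0
terms = map(lambda i: x ** i / factorial(i), range(20))  # map: i -> x^i / i!
approx = reduce(lambda a, b: a + b, terms, 0.0)          # reduce: add all the pieces
print(approx)                                            # ~2.718281828 for x = 1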

Re:Um, first question: WTF is MapReduce? (0)

Anonymous Coward | more than 6 years ago | (#24757855)

Haha, you bought an !phone.

Everyone point and laugh!

Re:Um, first question: WTF is MapReduce? (1)

Varun Soundararajan (744929) | more than 6 years ago | (#24764661)

and can I run Linux on it? Or it on Linux? Is it available for my iPhone?

First let's figure out if we can run Vista with it. Vista is toooooo slow.

Wah? (0, Redundant)

Smidge207 (1278042) | more than 6 years ago | (#24756343)

'body else read that as 'Injected With SQL'?

=Smidge=

MySQL has no common sense anyway. . . (-1, Troll)

Anonymous Coward | more than 6 years ago | (#24756387)

> been highly controversial, due to an apparent conflict with standard data warehousing common sense

Not like MySQL cared about data integrity in the past. . . whay start now?!

Re:MySQL has no common sense anyway. . . (5, Funny)

SgtPepperKSU (905229) | more than 6 years ago | (#24756463)

Not like MySQL cared about data integrity in the past. . . whay start now?!

Gaaah! Data corruption!
Your post must have been stored in MySQL...

Re:MySQL has no common sense anyway. . . (1)

slimjim8094 (941042) | more than 6 years ago | (#24759679)

As a matter of fact, it was... :/

Mmm.. MapReduce is LISP (2, Insightful)

Anonymous Coward | more than 6 years ago | (#24756425)

People who don't know LISP are bound to reinvent it, badly.

Re:Mmm.. MapReduce is LISP (4, Funny)

geminidomino (614729) | more than 6 years ago | (#24757319)

Well done, AC. You've exposed their dirty little Scheme.

Re:Mmm.. MapReduce is LISP (1, Funny)

Anonymous Coward | more than 6 years ago | (#24757497)

Yes, they're out to Steele LISP's imaginary property.

Re:Mmm.. MapReduce is LISP (0)

Anonymous Coward | more than 6 years ago | (#24762407)

People who don't realize that LISP is just functional programming are bound to assume that LISP is special.

Clearly, MapReduce is conceptually inspired by functional programming languages such as LISP.

But LISP won't distribute your MapReduce instances, or run them efficiently.

Perhaps a good addition to data warehousing (4, Interesting)

MarkWatson (189759) | more than 6 years ago | (#24756441)

Data warehousing (here I mean databases stored in column order for faster queries, etc.) may get a lift from using map reduce over server clusters. This would get away from using relational databases for massive data stores for problems where you need to sweep through a lot of data, collecting specific results.

I think that it is interesting, useful, and cool that Yahoo is supporting the open source Nutch system, which implements map reduce APIs for a few languages - it makes it easier to experiment with map reduce on a budget.

Re:Perhaps a good addition to data warehousing (2, Interesting)

roman_mir (125474) | more than 6 years ago | (#24756705)

Except that relational databases are not just indexed objects copied across a large network of cheap PCs. What's good for Google may not be suitable for other databases, which actually care about ACID properties of transactions and don't necessarily have the infrastructure to run highly parallel select queries.

Re:Perhaps a good addition to data warehousing (1)

jefu (53450) | more than 6 years ago | (#24757285)

I'm currently working on a project where users will be able to apply different types of transformation and collection to timestamped data and map/filter/reduce style algorithms are perfect ways to give them that capability.

The kind of capability might look something like: give me the average temperature at hourly intervals for each day of the year, for a dataset that spans multiple years. In this case there's no map and the reduce does the work; in other cases this may be turned around.

The data involved is sitting on one processor and not overly large, but a map/reduce view is probably the easiest one for people to understand.

Re:Perhaps a good addition to data warehousing (1)

Zaaf (190878) | more than 6 years ago | (#24761791)

Since the main difference between a RDBMS and MapReduce seems to be that the former is most suited for structured data and the latter best suited for unstructured data, it might be a good fit to use them both. And according to studies [computerworld.com] , it might be that north of 80% of our data is unstructured. This has been a big topic in data warehousing and led to the start of the whole DWH 2.0 thing.

So the fact that MapReduce is used in massively parallel processing machines like the ones from Greenplum (as quoted from the article) is not as bad as Stonebraker and Co. seem to think [databasecolumn.com].

Zaaf

Re:Perhaps a good addition to data warehousing (4, Informative)

ELProphet (909179) | more than 6 years ago | (#24756749)

Actually, MapReduce doesn't change anything about the way data's stored; it's just a pipe between two sets of stored data, and really just needs an interface on both ends to get the task into MapReduce (which is what it seems the projects TFS/A mention do). BigTable is the storage mechanism that's incompatible with most traditional row-based RDBMSs. GFS is just the underlying storage mechanism.

http://labs.google.com/papers/gfs.html [google.com]
http://labs.google.com/papers/bigtable.html [google.com]
http://labs.google.com/papers/mapreduce-osdi04.pdf [google.com]

Note that all of those were published several years ago; I'd bet dollars to donuts that Google is _WAY_ beyond this internally if it's just reaching commercial use by their competitors.

Re:Perhaps a good addition to data warehousing (5, Informative)

grae (14464) | more than 6 years ago | (#24757739)

If you're interested in one of the sorts of things that Google has done with MapReduce, look no further than Sawzall.

http://research.google.com/archive/sawzall.html [google.com]

Sawzall is essentially designed around the mapreduce framework. It's impossible to *not* write a mapreduction in Sawzall. The way it works:

Your program is written to process a single record. The magic part happens when you output: you have to output to special tables. Each of these table types has a different way that it combines data emitted to it.

So, during the map phase, your program is run in parallel on each input record. During the reduce phase, the reduction happens according to whatever combining operation each output table type specifies.

There was some work to be done to have enough different output table types to do everything that was useful, especially since you might want to take the output and plug it in as the input to another phase of mapreduction.
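A loose Python analogue of that flow (this is not Sawzall syntax; the record fields, table names, and combiners are all invented for illustration):

from collections import defaultdict

def process_record(record):
    # The per-record program: runs independently on each input (the "map" side),
    # emitting (table, key, value) tuples instead of returning a result.
    yield ("count", record["lang"], 1)
    yield ("longest", record["lang"], len(record["text"]))

# Each output table type defines how emitted values combine (the "reduce" side).
combiners = {"count": lambda a, b: a + b, "longest": max}

tables = defaultdict(dict)
records = [{"lang": "en", "text": "hello"}, {"lang": "de", "text": "hallo welt"}]
for rec in records:
    for table, key, value in process_record(rec):
        row = tables[table]
        row[key] = combiners[table](row[key], value) if key in row else value
print(dict(tables))
# {'count': {'en': 1, 'de': 1}, 'longest': {'en': 5, 'de': 10}}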

One of the biggest reasons this was a major innovation for Google was that it let some of the people who weren't really programmers still come up with useful programs, because the Sawzall language was pretty simple (especially when combined with some of the library functions that had been implemented to do common sorts of computations.) There were also some interesting ways in which the security model was implemented, but as far as I know they haven't been published yet.

There certainly are plenty of other technical things that can be done to improve a system like MapReduce (and I know that many of them were in various forms of experimentation when I left the company) but at least some of them are highly dependent on Google's infrastructure, and not really relevant to a general discussion. (I suspect that the papers linked above might have some hints, but it has been a while since I looked at them.)

Re:Perhaps a good addition to data warehousing (1)

tuomoks (246421) | more than 6 years ago | (#24761841)

Correct. I sometimes wonder how many /. readers are really developers? MapReduce is old, old technology; Google just made it famous and, maybe, documented it. It is not useful in all cases, but it's never worse than any other method in throughput. If you have to "map" information, the more unbalanced the data is, the better it gets.

Actually, the question about developers came up because a lot of replies are talking about APIs - if you code, write your own; it is very easy once you understand the principle. And I can tell you, multiple CPUs, parallel processing, and the MapReduce method were already known in the 70s, when I had to write data collections for whatever reason.

And yes, BigTable is an (almost) totally different issue, but even that is not new, just used by Google at this scale maybe for the first time. Not sure even of that - huge government systems sometimes do crazy things but don't tell anybody how.

Re:Perhaps a good addition to data warehousing (5, Informative)

owenomalley (103963) | more than 6 years ago | (#24756761)

The correct project name is Hadoop [apache.org]. It was factored out of Nutch 2.5 years ago. And Yahoo has been putting a lot of effort into making it scale. We run 15,000 nodes with Hadoop in clusters of up to 2,000 nodes each, and soon that will be 3,000 nodes. I used 900 nodes to win Jim Gray's terabyte sort benchmark [yahoo.com] by sorting 1 TB of data (10 billion 100-byte records) in 3.5 minutes. It is also used to generate Yahoo's Web Map [yahoo.com], which has 1 trillion edges in it.

Re:Perhaps a good addition to data warehousing (1)

MarkWatson (189759) | more than 6 years ago | (#24757193)

Cool!! And thanks for the correction.

Re:Perhaps a good addition to data warehousing (0)

Anonymous Coward | more than 6 years ago | (#24757743)

You guys have done an awesome job getting such ugly and poorly implemented code to perform. It's still downright nasty code, but at least it's not dog slow anymore. Congrats on the great job!

Re:Perhaps a good addition to data warehousing (1)

poot_rootbeer (188613) | more than 6 years ago | (#24765093)

The correct project name is Hadoop. It was factored out of Nutch 2.5 years ago

SPEAK

ENGLISH

Re:Perhaps a good addition to data warehousing (1)

targyros (1351955) | more than 6 years ago | (#24760733)

This is a great point. To add to that, the way we see it is that MapReduce serves two purposes:

1) Go beyond SQL. This is not a big deal for transactional databases, where most of the logic is well-expressible in standard SQL. But analytics are another story since there is so much custom logic (how do you implement a data mining algorithm, like association rules, in SQL? It's not easy!)

2) Go parallel. Nobody knew what a good parallel API looked like before Google brought MapReduce and proved its value by using its own systems as guinea pigs. Since our Data Warehouse architecture is natively MPP, MapReduce is a great fit to speed up analytical applications.

The combination of these two possibilities, we believe, can be revolutionary for Data Warehousing. If you're interested in reading more, take a look at our blog [asterdata.com].

Good luck with transactions and map/reduce (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24756475)

they go together like paint and peanut butter.

Map/Reduce is better suited for read-only data mining situations.

First they attack it (3, Interesting)

Intron (870560) | more than 6 years ago | (#24756545)

Re:First they attack it (1)

sohp (22984) | more than 6 years ago | (#24757725)

Mahatma Gandhi actually said, "First they ignore you, then they ridicule you, then they fight you, then you win."

The custodians of the massively complex relational data warehouse tools are seeing their world turn obsolete as the lighter-weight MySQL, the more flexible MapReduce, and the BASE [neu.edu] worlds evolve beyond them, so yes, they are going to kick up a fight. Don't let the screen door hit you in the butt on the way out, guys.

Re:First they attack it (3, Interesting)

Bazouel (105242) | more than 6 years ago | (#24758441)

From a comment made about the article:

You [the article's authors] seem to be under the impression that MapReduce is a database. It's merely a mechanism for using lots of machines to process very large data sets. You seem to be arguing that MapReduce would be better (for some value of better) if it were a data warehouse product along the lines of Teradata. Unfortunately, the resulting tool would be less effective as a general purpose mechanism for processing very large data sets.

Re:First they attack it (1)

shutdown -p now (807394) | more than 6 years ago | (#24775741)

It would only be fair to include the article authors' answer as well:

It's not that we don't understand this viewpoint. We are not claiming that MapReduce is a database system. What we are saying is that like a DBMS + SQL + analysis tools, MapReduce can be and is being used to analyze and perform computations on massive datasets. So we aren't judging apples and oranges. We are judging two approaches to analyzing massive amounts of information, even for less structured information.

Re:First they attack it (1)

Bazouel (105242) | more than 6 years ago | (#24780299)

That answer does not make sense at all given the points they try to make in the article, which clearly show their misunderstanding of what MapReduce is.

What a silly name... (1, Interesting)

Anonymous Coward | more than 6 years ago | (#24756583)

In functional programming, map and reduce are very, very old knowledge (and, yup, functional programming has its uses, and, yes, there are some very good and very successful programs written using functional languages).

What's next? A product called DepthFirstSearch (notice the uber broken camel case for a product name) that has nothing to do with the depth-first search algorithm?

Google? Allo?

Um. (1)

Estanislao Martnez (203477) | more than 6 years ago | (#24756585)

Doesn't Oracle have this sort of feature already, without the Google "MapReduce" buzzword buzz?

Re:Um. (2, Informative)

EvilIntelligence (1339913) | more than 6 years ago | (#24756653)

Yes, it's called hash partitioning. It's been around since version 7 or 8, about 10 years ago (the current release is 11).

Re:Um. (1, Informative)

Anonymous Coward | more than 6 years ago | (#24756757)

Uh, no. MapReduce is a parallel programming model -- not a way of laying out data on disk.

Re:Um. (2, Informative)

raddan (519638) | more than 6 years ago | (#24757231)

Actually, the two are paired: programming model and implementation. The reason there's a programming model is that functional methods allow Google's implementation to automatically parallelize the input data for feeding to the cluster. So the implementation is very important, because that's actually how the data is processed and returned.

In that sense, Oracle's clustering optimizations are also a paired programming model and implementation, since, presumably, you need to know Oracle's SQL language extensions in order to take advantage of them (disclaimer: I don't use Oracle). From what I understand about functional programming, SQL should be ideally positioned to take advantage of these kinds of optimizations, since the actual implementation details of any SQL query are always left to the query optimizer, SQL being a declarative language. I'm going to speculate wildly and say that you could probably write a SQL interpreter using a functional style as well, and that good ones probably already do.

Re:Um. (1)

Estanislao Martnez (203477) | more than 6 years ago | (#24759035)

IIRC, Oracle has features for parallelizing query execution automatically. These features are enabled by various combinations of session settings and query hints, and can parallelize execution either within a single server machine, or across multiple machines in a cluster.

I'm going to speculate wildly and say that you could probably write a SQL interpreter using a functional style as well, and that good ones probably already do.

It's deeper than that. Save for relvar update operations, relational algebra [wikipedia.org] just is a functional language. Relational algebra really just consists of relation types and higher-order functional operators over them. For example, relational restriction is an operator that takes a relation over a tuple type and a predicate over that tuple type, and returns another relation over the same tuple type.

Re:Um. (1)

raddan (519638) | more than 6 years ago | (#24759783)

Yeah, that's why I speculated that a SQL interpreter might be written so easily -- Oracle really is a rabbit hole. I've done some relational algebra in a database course (and was also exposed to set theory in my discrete maths course), but it was unclear to me whether query optimizers actually broke a query down into relational algebra or not. In fact, I remember that despite having had prior experience with SQL, relational algebra was much easier for me to wrap my head around than SQL. My professor was hesitant to go too much into optimizers, though, since most of that is implementation-dependent. He thought it was much more important to talk about ACID, so that's what we spent a good deal of time on.

Re:Um. (1)

EvilIntelligence (1339913) | more than 6 years ago | (#24768753)

The optimizer in an Oracle database (and others, I'm sure) actually determines the "access path" based on resource cost. It automatically generates many different access paths and, based on known statistics about the underlying objects in question, determines the cost in resources to execute each path (CPU, memory, disk I/O, etc.), then chooses the one with the least cost. It's not correct 100% of the time, but you can influence the optimizer through configuration parameters at the database level as well as "hints" in the SQL statement itself, as specially coded comments. Oracle supports parallel inserts/updates/deletes across multiple partitions, as well as parallel reads.

Whether you use partitioning in a relational database vs data sharding across multiple machines will depend on what you intend to do with that data. If you only plan to simply use a given value to do a lookup (is the word "car" in that page?), then sharding may be the way to go, since it easily creates a wide and flat surface to lay out your data for quick lookup. If you plan on joining that data or doing any kind of complex analysis, then a relational database is the way to go. So it all still comes back to business requirements for the system.

Re:Um. (1)

Estanislao Martnez (203477) | more than 6 years ago | (#24770835)

The optimizer in an Oracle database (and others, I'm sure) actually determines "access path" based on resource cost. It automatically generates many different access paths, and based on known statistics about the underlying objects in question, determines the cost in resources to execute that path (CPU, memory, disk I/O, etc, etc), then chooses the one with the least cost.

Leaving aside the issue of where the "query rewriter" ends and where the "optimizer" starts, no, that's not all that happens on the way from a SQL query against a database to an execution plan. Access paths are things like choosing various types of index access or table scans. However, many optimizations to SQL statements are purely syntactic, and are based on semantic equivalences guaranteed by the relational algebra.

An easy example: a restriction that applies to just one of the relations in a join can be pushed inside the join onto that relation. For example, if you have restrict(join(A,B), predicate_over_A), you can transform that relational algebra expression into join(restrict(A, predicate_over_A), B). This optimization is called "pushing the restriction," and it reduces the number of rows that have to be processed for the join.
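The same equivalence in a toy Python model, with relations as lists of dicts (table names and data invented):

def restrict(rel, pred):
    return [row for row in rel if pred(row)]

def join(A, B, key):
    return [{**a, **b} for a in A for b in B if a[key] == b[key]]

A = [{"id": 1, "x": 5}, {"id": 2, "x": 9}]
B = [{"id": 1, "y": "a"}, {"id": 2, "y": "b"}]
p = lambda row: row["x"] == 5              # a predicate over A's attributes only

after = restrict(join(A, B, "id"), p)      # restriction applied after the join
pushed = join(restrict(A, p), B, "id")     # restriction pushed inside the join
assert after == pushed                     # same rows, but fewer fed to the join
print(pushed)                              # [{'id': 1, 'x': 5, 'y': 'a'}]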

So, database query optimization, deep down, involves reasoning both about equivalent query transformations and hardware resource costs for various operations.

Re:Um. (0)

Anonymous Coward | more than 6 years ago | (#24781875)

Yes, Oracle has more than one way to skin this cat:

- user-defined aggregates (since 8i, I think)

- table functions. Oracle's table functions can run in parallel and support more than one way to partition the input. It is fairly easy to simulate map reduce using them.

Google's map-reduce is so powerful (IMHO) not because of the programming paradigm but because Google built a distributed, fault-tolerant data store (GFS) and the environment (a cluster manager?) to manage 1000's of processes on 100's to 1000's of machines.

Low Quality Paper (0)

Anonymous Coward | more than 6 years ago | (#24756599)

The original paper for MapReduce, http://labs.google.com/papers/mapreduce-osdi04.pdf, is actually of pretty poor quality.

There are not really any useful comparisons in the paper. They do not indicate how it scales as the number of processors increases, so while it may be very fast on the mammoth amount of hardware used, it's unclear how much faster it would actually get on additional hardware.

If you look at the Sort section of the comparison, they seem to be comparing to http://www.almaden.ibm.com/cs/gpfs-spsort.html, which is a 10% improvement on wildly improved hardware - a rather disappointing result. This would not have been a problem with the paper had there been any mention of it, but there was not.

Again Bjarne got it right (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24756879)

I am with Bjarne on this one. Bjarne Stroustrup, creator of the C++ programming language, claims that C++ is experiencing a revival and that there is a backlash against newer programming languages such as Java and C#. "C++ is bigger than ever. There are more than three million C++ programmers. Everywhere I look there has been an uprising - more and more projects are using C++. A lot of teaching was going to Java, but more are teaching C++ again. There has been a backlash," said Stroustrup.

He continues: "...What would the world be like without Google? ... Only C++ can allow you to create applications as powerful as MapReduce, which allows them to create fast searches."

I totally agree. If Java (or Python etc., for that matter) were fast enough, why did Google choose C++ to build their insanely fast search engine? MapReduce rocks. No Java solution can even come close.
I rest my case.

Got what right? (3, Interesting)

argent (18001) | more than 6 years ago | (#24757053)

I don't think you can credit Bjarne with "compiled code is faster than interpreted code" (or the 21st century version: "compilers can perform better optimizations than JIT translators").

C++ happens to be the most popular fully compiled language, having edged Fortran out of that position some time near the end of the last century.

Back in the early '80s, when he was coming up with C++, the big Fortran savants were saying stuff like "Fortran is bigger than ever. There are more than X million Fortran programmers. Everywhere I look there has been an uprising... a lot of teaching was going to Pascal, but more are teaching Fortran again. There has been a backlash."

----

And that's not the only thing C++ has in common with Fortran, either.

Re:Got what right? (3, Interesting)

johanatan (1159309) | more than 6 years ago | (#24757711)

" (or the 21st century version: "compilers can perform better optimizations that JIT translators").

Actually, JITters can do some optimizations that compilers can't--by splitting the compilation into a frontend and a backend. The front end is essentially just a parser, and the later the back-end compile happens, the more opportunities for optimizations actually open up (including such things as utilizing specific instruction sets for given architectures and fine tuning the compile based on run time statistics).

See the LLVM for more info: http://llvm.org/ [llvm.org]

(or .NET for that matter--but we're anti-MS around here. :-)

Re:Got what right? (1)

argent (18001) | more than 6 years ago | (#24763681)

including such things as utilizing specific instruction sets for given architectures and fine tuning the compile based on run time statistics

1. That's a nice theory but in practice JIT implementations of interpreters are not actually anywhere near as fast as compilers for real world workloads.

2. When performance is critical (or even if you only THINK it's critical, see "Gentoo Linux"), compilers can use the same techniques, and still take advantage of the better regional and global optimizations they can do... see Intel's compiler for the IA64 architectures for an extreme example.

3. Improvements in local optimization are nice, but unless you're running on something like Itanium regional optimizations trump local ones. And if you are, regional optimizations STILL trump local ones.

4. Finally, when you're REALLY up against a wall, there's JIT recompilation.

Re:Again Bjarne got it right (1)

johanatan (1159309) | more than 6 years ago | (#24757189)

You are aware that Python has built-in support for map and reduce, no? And that the Python interpreter and most JVMs are written in C++ (not to mention many operating systems)? When did the implementation language ever prove the abstraction worthwhile?

Re:Again Bjarne got it right (1)

Lucas.Langa (922843) | more than 6 years ago | (#24757331)

Python is written in C, actually.

Re:Again Bjarne got it right (3, Insightful)

johanatan (1159309) | more than 6 years ago | (#24757629)

To most people, C++ is C. :-) Unfortunate but true.

Re:Again Bjarne got it right (1)

smellotron (1039250) | more than 6 years ago | (#24759123)

Stop embracing the ignorance.

Re:Again Bjarne got it right (1)

johanatan (1159309) | more than 6 years ago | (#24764231)

Oh, I don't embrace it! In fact, I don't care to ever use C (proper) and I certainly never intend to use C++ as if it were C (that's actually my biggest gripe with C++ currently as recent co-workers do not always agree that high-level design is good and the language [and apparently sound arguments] do nothing to convince them of that).

But, my original point still stands if you substitute 'C' for 'C++'. Heck, I could've even mentioned assembly if we really want to talk perf. Everyone knows that hand-tuned assembly beats everything else, no? But, the point of MapReduce is to provide a high-level abstraction for massive parallelization. And, in fact, it is something that you'd get for free if using a functional language like Haskell or the built-in map and reduce of Python (and, there's quite a bit of Python at Google if I am not mistaken).

In short: the language of the implementation says nothing of the validity of the abstraction. Yes, C++ is the fastest language, but there are times when even it is not fast enough and assembly must be hand-tuned.

Re:Again Bjarne got it right (1)

apathy maybe (922212) | more than 6 years ago | (#24765827)

To me, C is basically a subset of C++ (and I am well aware that C came first, and that it is exactly a subset).

That is, if I can program in C, I can do C++ as well, and if I can do C++, I can use many of the techniques when programming C.

Of course, I can't program either C or C++ (Java and PHP are the closest I've got).

So, your original comment that the "Python interpreter and most JVMs are written in C++" is correct, if you understand C as being a subset of C++. But actually, you are wrong when it comes down to the nitty gritty. And you should have said C originally, if you knew that was what Python was actually written in.

Re:Again Bjarne got it right (1)

johanatan (1159309) | more than 6 years ago | (#24774263)

It was a slip. I am and was aware that Python is written in C (though I fail to see why really). C++ can do everything C can and better. And, I disagree with the statement about C programmers being able to program C++. That is just not true. C++ is a multi-paradigm language and C is essentially only a single paradigm--namely, procedural. It is exactly C++'s support for the [obsolete] procedural/structured methodology that would [mis]-lead a C programmer into thinking that they know C++.

Re:Again Bjarne got it right (1)

johanatan (1159309) | more than 6 years ago | (#24774331)

And, one other minor point-- C is not exactly a subset of C++. Ever since C99 brought about new features to C (the specific details of which I do not recall) which C++ does not support (and possibly even before then), they have diverged. It is true though that C is essentially a subset of C++.

Re:Again Bjarne got it right (1)

apathy maybe (922212) | more than 6 years ago | (#24780353)

I meant to say "not exactly", damn brain running ahead of myself again...

It makes more sense if you automatically insert the "not" that I inadvertently missed.

(I seem to be doing it quite often as well, forgetting my negatives...)

Re:Again Bjarne got it right (1)

Rakishi (759894) | more than 6 years ago | (#24770837)

Well, technically, the most popular (and fastest, I believe) implementation of Python is written in C, but Python itself doesn't need to be written in C. There is a Java implementation, a Python implementation, a .NET implementation, and probably a few others.

Re:Again Bjarne got it right (1, Informative)

Anonymous Coward | more than 6 years ago | (#24757195)

Don't confuse the search engine with MapReduce. The MapReduce engine creates the indexes for the search engine; it's a batch job processor. Just because Google chose C++ does not mean it is the only choice, even if it was the best choice for them. Hadoop (a Java project at Yahoo, and open source too) has a MapReduce implementation.

Re:Again Bjarne got it right (4, Interesting)

samkass (174571) | more than 6 years ago | (#24757425)

If Java (or Python etc., for that matter) were fast enough, why did Google choose C++ to build their insanely fast search engine?

Because their developers knew it better? Because it had better 64-bit support when they started it? Because full GC's weren't compatible with their use case and IBM's parallel GC VM hadn't been released yet? Because they could get and modify all the source to all the libraries?

I don't know the answer, but there are a lot of possibilities besides speed. You're jumping to an awfully big conclusion there, Mr. Coward.

Re:Again Bjarne got it right (0)

Anonymous Coward | more than 6 years ago | (#24762171)

Because compiled languages with a frugal library use _far_ less memory than the VM + JIT compiler + all-included class library?

When the amount of data to process is insanely big, the most important optimizations include reducing run-time space and execution time. Pseudo-compiled, garbage-collected languages fail big on both optimizations. Those are better at optimizing development time, though.

As always, you have to use the right tool for the job.

Re:Again Bjarne got it right (0)

Anonymous Coward | more than 6 years ago | (#24770943)

Uhhhm, you're contradicting yourself. When your data set is 20 GB, you really don't care if the program has an overhead of 20 MB or 200 MB. In other words, loading all the extra VM and library stuff is inconsequential. Now if you had said memory efficiency (or garbage collection overhead), that would be a different point, but you didn't.

CPU usage is probably not the bottleneck in many of these cases, since data reading alone is a huge bottleneck. Large amounts of data processing are NOT equivalent to large amounts of computation. Protein folding may have a couple kilobytes of data but still require a supercomputer (and still fail). Corporate data aggregation may have a couple terabytes of data but require little more than a Pentium 1 if the statistics being computed are simple enough.

Also, in terms of execution time, Java is close to C/C++ in most cases, or at least close enough that it probably doesn't matter much.

Re:Again Bjarne got it right (1)

Jack9 (11421) | more than 6 years ago | (#24757575)

Only C++ can allow you to create applications as powerful as MapReduce which allows them to create fast searches.

Except that MapReduce is not an application, that it was originally codified in LISP, and that Google started using the technology because they bought AltaVista, where it was originally used for searching.

An AC getting it all wrong? Unpossible.

Re:Again Bjarne got it right (1)

adpowers (153922) | more than 6 years ago | (#24759085)

Except that AltaVista was bought by Overture [wikipedia.org] who were then bought by Yahoo!. Also, I wouldn't really call MapReduce a technology. The individual functions (Map and Reduce) come from functional programming, but the concept is becoming popular because Google's implementation and Hadoop have made it easy to write large scale data processing applications without having to worry about scaling or failures yourself. It also doesn't hurt that many problems can be solved with MapReduce.

A five digit user getting it all wrong? Unpossible.

Re:Again Bjarne got it right (1)

Jack9 (11421) | more than 6 years ago | (#24760005)

The technology is not just MapReduce; it's how you manage multiple resources to leverage what is essentially brute force. Now try to keep up: someone can buy the shell of a company after another buys the heart:

http://arnoldit.com/wordpress/2008/01/18/map-reduce-the-great-database-controversy/ [arnoldit.com]

Hey look, we're both guilty of not being perfect. Thanks for the vote of confidence though!

Re:Again Bjarne got it right (3, Informative)

Rakishi (759894) | more than 6 years ago | (#24758603)

Well, someone should tell that to the people working on Hadoop. I'm sure they'd love to know that their Java MapReduce-based framework is impossible. Maybe they'll even be able to use the paradox to build a perpetual motion machine and power the world.

See: http://developers.slashdot.org/comments.pl?sid=900359&cid=24756761 [slashdot.org]

Re:Again Bjarne got it right (1, Informative)

Anonymous Coward | more than 6 years ago | (#24760111)

Hadoop is written in Java and does a fine job. And Google uses more Java than you can imagine.

Simply alternative to Map/Reduce (0)

Anonymous Coward | more than 6 years ago | (#24756991)

The Map/Confuse [youtube.com] algorithm.

Curt Monash has been debunked already (-1, Troll)

Anonymous Coward | more than 6 years ago | (#24757381)

There is almost nothing worthy in the Google technology from the perspective of general purpose (relational) databases, and especially nothing worthy in the Curt Monash article for anyone with at least some basic understanding of database technologies.

http://www.google.ca/search?hl=en&q=debunkings+kurt+monash&meta=

wrong argument? (2, Insightful)

fragbait (209346) | more than 6 years ago | (#24759017)

Though this post is my introduction to both MapReduce and the argument, it strikes me that the people arguing are arguing the wrong problem.

While MapReduce might be used against some structured data, it looks to be something for unstructured data and for dynamically inventing structure in it. Additionally, you might want to keep that new structure around for a while. You might want to load it up with terabytes of data. At the same time, this data is less and less useful over time.

Think about two of the key pieces of data Google has, web pages and user interaction and preference data. Web pages change over time. Web sites come and go. Some change a lot (news sites) and some change very little.

There is a LOT of user interaction data: clicks on pages, JavaScript that fires to DoubleClick, etc. Preferences change over time, too. Also, marketers want to dynamically react to the clicks and even to the minute change of a preference that generates a buck.

With such a large, changing, and time sensitive dataset, how could it be structured into something as relatively static as a schema? You would box yourself in by making it a schema and defining all the possible relationships.

So, you take it up one abstraction level and make a "schema" for making relationships. Furthermore, there is a narrow window within which you even care about data and how it is structured. Granted, you want the webpage/site data to stick around for queries. But even that is marginally useful. Think about how many pages deep you go into a Google query. I'm sure that varies by person, but I'd also bet that in practice it is pretty small.

Maybe everyone else gets that and I'm just late to the party. But my point is that the wrong argument is being made that this should follow all the RDBMS work that has come to date.

Sure, I do agree that they shouldn't completely ignore all of the research, but to suggest it has to have a schema, indices, etc. just comes across as arguing all data problems belong in a traditional database.

Or maybe I can take a different approach to this....my brain doesn't have an index. It does categorize data and it can categorize the same piece of data in multiple ways. As I learn new things, my brain creates new "indices" of sort. A large portion of the data in my brain is time sensitive, or indexed over time. The older I get, the more the details of the minutia of life (what I had for dinner this evening) isn't important any more and it loses its categorization. I don't have a schema for my brain, rather I have multiple and I invent and dissolve them over time. I don't know what new one I'll need in the future. I can't know that and without that, I can't make a schema for it. I also can't be constantly modifying the same schema in place. It is easier for me to invent a new one as I go and just abandon the old ones. Sure, new schemas will have parts of the old, but it is still a new schema with the old one still in place and referencing the same data that the new one will soon reference.

-fragbait

Functional Programming (0, Offtopic)

KliX (164895) | more than 6 years ago | (#24759705)

How many of you familiar with functional programming just *cringe* when you see how badly basic math is discussed in the programming mainstream?

just add Protocol Buffers (1)

vrmlguy (120854) | more than 6 years ago | (#24761735)

Anyone remember this story: http://tech.slashdot.org/tech/08/07/08/201245.shtml [slashdot.org] ? According to Google:

Protocol buffers are now Google's lingua franca for data -- at time of writing, there are 48,162 different message types defined in the Google code tree across 12,183 .proto files. They're used both in RPC systems and for persistent storage of data in a variety of storage systems.

(See http://code.google.com/apis/protocolbuffers/docs/overview.html [google.com] .)

If you think about it, Protocol Buffers are just about perfect for MapReduce applications. First, Protocol Buffers data streams are "flat" structures, very similar to database tables. If you need hierarchical data, I think that you'd tend to use multiple tables that incorporate foreign keys, rather than embedding the hierarchy every time it's referenced (as XML does).

Second, and again unlike XML, the data serialization is described via a .proto file, which can itself be serialized in exactly the same way as the data stream. It looks fairly easy to write a "Map" or a "Reduce" program that works with any Protocol Buffers data stream.

I suspect that this, rather than SQL compatibility, is the road to success with MapReduce processes.

Re:just add Protocol Buffers (1)

Prof.Phreak (584152) | more than 6 years ago | (#24763701)

I suspect that this, rather than SQL compatibility, is the road to success with MapReduce processes.

Why not both? :-)

A lot of distributed databases already implicitly support functionality that's equivalent to MapReduce, especially Greenplum and Netezza.

i.e., a map operation is just:

create table output as
select [cols] from [table] where [condition] distribute on (key1,key2,key3);

This will scan the table stored on all nodes and deposit the data across all the nodes in Netezza, distributed on key1, key2, key3 - i.e., an implicit "map".

One can then apply aggregate functions to do a "reduce" (possibly grouping by key1, key2, key3?).

The upshot? It's a lot more flexible in SQL than pretty much any weird language structure I've seen.

1970's style hype meets 2000's style hype (1)

speedtux (1307149) | more than 6 years ago | (#24761989)

Stonebraker isn't exactly the one to complain about this: just as MapReduce is being overhyped these days, relational databases were being overhyped in the 1970's, and he rode that wave all the way to fame and fortune. 30 years later, although every database system in the world calls itself "relational", very few database applications actually are relational.

MapReduce is indeed a simple, decades-old parallel programming technique. It's not the be-all-and-end-all of parallel programming, but it's good for solving a lot of real-world problems with minimum fuss and hassle.

Between the relational database hype of yore and today's MapReduce hype, give me the MapReduce hype any day. Relational database hype was all about pseudo-mathematical formality and ad hoc formalisms. MapReduce is at least about simple, working, real-world programming techniques. The sooner we get rid of Stonebraker's approach to computer science, the better off we will all be.

Good luck with transactions and map/reduce (1)

clint999 (1277046) | more than 6 years ago | (#24765133)

and can I run Linux on it? Or it on Linux? Is it available for my iPhone?

Wow... So few know about MapReduce? (1)

ZerdZerd (1250080) | more than 6 years ago | (#24768641)

I'm astounded that so few people here know about MapReduce. There are lots of good videos about it made by Google.
There's a five-part lecture about it starting here [youtube.com] (use this link [google.com] to view the rest)

Or simply search for "google mapreduce". I suggest watching one of the videos though :)
