
Google Caffeine Drops MapReduce, Adds "Colossus"

samzenpus posted about 4 years ago | from the time-to-upgrade dept.


An anonymous reader writes "With its new Caffeine search indexing system, Google has moved away from its MapReduce distributed number crunching platform in favor of a setup that mirrors database programming. The index is stored in Google's BigTable distributed database, and Caffeine allows for incremental changes to the database itself. The system also uses an update to the Google File System codenamed 'Colossus.'"


65 comments


Well, then. (1, Insightful)

NotQuiteReal (608241) | about 4 years ago | (#33550210)

That sums it up nicely. Nothing more needs to be added.

Re:Well, then. (0)

Anonymous Coward | about 4 years ago | (#33550442)

Except that it's really fucking big.

Re:Well, then. (0)

Anonymous Coward | about 4 years ago | (#33550578)

I prefer to fuck close to water. Chicks tend to put out more near the sea.

Re:Well, then. (1, Funny)

The Clockwork Troll (655321) | about 4 years ago | (#33551010)

Do you drink Miller Lite? That's also fucking pretty close to water.

Re:Well, then. (1)

PopeRatzo (965947) | about 4 years ago | (#33552266)

I guess nobody got that but me, Clockwork. Well done.

Re:Well, then. (0, Troll)

arivanov (12034) | about 4 years ago | (#33552538)

Not quite. This explains why, about two years back, Google search result quality suddenly went down the drain.

It now had news and key sites indexed within minutes of an update, so I guess they got more advertising revenue out of it. However, the quality of search results on terms not related to the news of the day actually dropped. Most pundits attributed this to Google losing the war against blog spam.

Re:Well, then. (1)

ls671 (1122017) | about 4 years ago | (#33556410)

> It now had news and key sites in minutes after update...

I noticed that too a few years ago, so it made me wonder to what extent TFA was news. Maybe they were previously using it in a less prominent manner, but they certainly had something similar, at least in functionality, a few years ago.

I have no idea (0)

Anonymous Coward | about 4 years ago | (#33550212)

I have no idea what any of that means.

Re:I have no idea (4, Interesting)

icebike (68054) | about 4 years ago | (#33550654)

Follow the link to the original article over on The Register, where you will find a rather lucid explanation, far better than the summary above provides.

Short answer:

The old method of building their search database was essentially a batch job: run it, wait, wait, wait a long time, then swap the results into the production servers.

The new method is continuous updates into a gigantic database spread over their entire network.

This is why things show up in Google days, sometimes weeks, ahead of the other search engines. The other guys are still trying to clone Google's old method.
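
To make the contrast concrete, here is a toy sketch of the two shapes in Python. Everything here (the PAGES corpus, the function names) is invented for illustration; it is not Google's code, just the difference between rebuild-and-swap and fold-in-as-you-go:

<ecode>
# A stand-in corpus; a real crawler would fetch these over HTTP.
PAGES = {
    "example.org/a": "caffeine replaces mapreduce",
    "example.org/b": "colossus stores the index",
}

def batch_reindex(production):
    """Old shape: rebuild the whole index, then swap it into production."""
    staging = {}
    for url, text in PAGES.items():        # run it, wait, wait, wait...
        for word in text.split():
            staging.setdefault(word, set()).add(url)
    production.clear()                     # ...swap results in at the very end
    production.update(staging)

def incremental_update(url, text, production):
    """New shape: fold one freshly crawled page into the live index."""
    for word in text.split():
        production.setdefault(word, set()).add(url)

index = {}
batch_reindex(index)
incremental_update("example.org/c", "mapreduce still used elsewhere", index)
print(sorted(index["mapreduce"]))   # ['example.org/a', 'example.org/c']
</ecode>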

Re:I have no idea (4, Interesting)

A Friendly Troll (1017492) | about 4 years ago | (#33551384)

This is why things show up in Google days, sometimes weeks ahead of the other search engines.

For a hands-on example of what icebike is saying, look here:

http://www.google.com/search?q=%22This+is+why+things+show+up+in+Google+days%2C+sometimes+weeks+ahead+of+the+other+search+engines%22 [google.com]

Actually, Google will index Slashdot comments in a matter of minutes.

Re:I have no idea (3, Informative)

bitflip (49188) | about 4 years ago | (#33552032)

Re:I have no idea (4, Funny)

Runaway1956 (1322357) | about 4 years ago | (#33552124)

Bing probably redirects the search to Google, then displays the results on their own page. Bleahhh.

Sounds inefficient (4, Interesting)

martin-boundary (547041) | about 4 years ago | (#33550272)

This sounds like it's going to be highly inefficient for nonlocal calculations, or am I missing something? Basically, if the calculation at some database entry is going to require inputs from arbitrarily many other database entries which could reside anywhere in the database, then the computation cost per entry will be huge compared to a batch system.

Re:Sounds inefficient (3, Interesting)

iONiUM (530420) | about 4 years ago | (#33550312)

I read TFA (I know, that's crazy). They don't come right out and say it, but I believe what they did is put a MapReduce-type system (MapReduce splits the work into subtasks for faster calculation) on top of database triggers. So basically this new system spreads a database across their file system, across many computers, and allows incremental updates which, when they occur, trigger a MapReduce-type algorithm to crunch the new update.

This way they get the best of both worlds. At least, I think that's what they're doing; otherwise their entire system would... stop working... since MapReduce is the whole reason they can parse such large amounts of information.
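
A minimal sketch of that trigger idea, assuming a hypothetical Table class with per-row update callbacks (none of these names come from TFA):

<ecode>
# Hypothetical illustration: a tiny "table" that fires a trigger on each
# update, so derived data is recomputed incrementally, not in one big batch.
class Table:
    def __init__(self):
        self.rows = {}
        self.triggers = []

    def on_update(self, callback):
        self.triggers.append(callback)

    def put(self, key, value):
        self.rows[key] = value
        for trigger in self.triggers:   # fires per row, not per batch
            trigger(key, value)

pages = Table()
keyword_index = {}

def reindex_page(url, text):
    # Only this page's contribution to the index is recomputed.
    for word in text.split():
        keyword_index.setdefault(word, set()).add(url)

pages.on_update(reindex_page)
pages.put("example.org/a", "incremental caffeine update")
print(keyword_index["caffeine"])   # {'example.org/a'}
</ecode>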

Re:Sounds inefficient (5, Informative)

kurokame (1764228) | about 4 years ago | (#33550928)

No, that's not it.

MapReduce is a sequence of batch operations, and generally, Lipkovitz explains, you can't start your next phase of operations until you finish the first. It suffers from "stragglers," he says. If you want to build a system that's based on a series of map-reduces, there's a certain probability that something will go wrong, and this gets larger as you increase the number of operations. "You can't do anything that takes a relatively short amount of time," Lipkovitz says, "so we got rid of it."

"[The new framework is] completely incremental," he says. When a new page is crawled, Google can update its index with the necessary changes rather than rebuilding the whole thing.

There are still cases where Caffeine uses batch processing, and MapReduce is still the basis for myriad other Google services. But prior to the arrival of Caffeine, the indexing system was Google's largest MapReduce application, so use of the platform has been significantly, well, reduced.

They're not still using MapReduce for the index. It's still supported in the framework for secondary computations where appropriate, and it's still used in some other Google services, but it's been straight-up replaced for the index. Colossus is not a new improved version of MapReduce, it's a completely different approach to maintaining the index.

Re:Sounds inefficient (5, Informative)

kurokame (1764228) | about 4 years ago | (#33550936)

Sorry, Colossus is the file system. Caffeine is the new computational framework.

I've made the same error in several posts now... but Slashdot doesn't support editing. Oh well! Everyone reads the entire thread, right?

Re:Sounds inefficient (0)

sortius_nod (1080919) | about 4 years ago | (#33551294)

You must be new here...

Re:Sounds inefficient (1)

kurokame (1764228) | about 4 years ago | (#33551362)

You really ought to meet my friends Irony and Sarcasm.

Re:Sounds inefficient (2, Funny)

onefriedrice (1171917) | about 4 years ago | (#33553604)

Wait... you really have two friends named Irony and Sarcasm? That's incredible! What are the chances...

Re:Sounds inefficient (1)

frank_adrian314159 (469671) | about 4 years ago | (#33555280)

They're not still using MapReduce for the index. It's still supported in the framework for secondary computations where appropriate, and it's still used in some other Google services, but it's been straight-up replaced for the index. Colossus is not a new improved version of MapReduce, it's a completely different approach to maintaining the index.

Yes, it sounds like they're treating their data structures as mostly static and propagating the changes that result from input changes at each stage of the algorithm. New changes are then rippled through the system: very dataflow-ish/forward-chaining-ish. If you have a large volume of mostly static data, it makes sense to reconfigure your algorithms in this form. Not only does it take less computational time (as you're only touching and recomputing items that have changed), but it's simpler to distribute, since the deltas are usually much smaller than the totality of items needed to recompute the output data, and untouched data does not need to be moved. Now all they need to do is postpone prospective forward-chained changes until they're needed to produce an output (or until the system is quiescent), and they should be close to a theoretical optimum as far as performance goes.
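
A toy version of that forward-chaining style: a change marks only its downstream dependents for recomputation, and untouched data is never revisited. The dependency graph and function names are made up for illustration:

<ecode>
# Each derived value lists the inputs it consumes; when an input changes,
# only the values downstream of it are recomputed (forward chaining).
DEPENDENTS = {            # input -> derived values that consume it
    "page_text": ["keywords"],
    "keywords": ["rankings"],
}

def recompute(name, state):
    # Stand-in for the real per-stage computation.
    state[name] = "derived-from-" + ",".join(sorted(state))

def propagate(changed, state):
    """Ripple one change through the dataflow graph; touch nothing else."""
    queue = [changed]
    while queue:
        node = queue.pop()
        for dependent in DEPENDENTS.get(node, []):
            recompute(dependent, state)
            queue.append(dependent)

state = {"page_text": "new crawl"}
propagate("page_text", state)   # updates keywords, then rankings, only
print(sorted(state))            # ['keywords', 'page_text', 'rankings']
</ecode>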

Re:Sounds inefficient (-1, Troll)

Anonymous Coward | about 4 years ago | (#33551140)

Amazing how daft you are. Wow.

Re:Sounds inefficient (2, Informative)

maraist (68387) | about 4 years ago | (#33556138)

BigTable scales pretty well (go read its white papers), though perhaps not as efficiently as map-reduce for something as simple as text-to-keyword statistics (otherwise, why wouldn't they have used it all along?).

I'll caveat this whole post with: this is all based on my reading of the BigTable white paper a year ago, plus having played with Cassandra, Hadoop, etc. occasionally since then. Feel free to call me out on any obvious errors. I've also looked at a lot of DB internals (Sybase, MySQL MyISAM/InnoDB and PostgreSQL).

What I think you're thinking is that in a traditional RDBMS (which they hint at), you have a single logical machine that holds your data. That's not entirely true, because even with MySQL you can shard the F*K out of it. Consider putting a MySQL server on every possible combination of the first two letters of a Google search. Then take high-density combinations (like those beginning with 's') and split them out 3, 4 or 5 ways.

There are drastic differences in how the data is stored, but that's not strictly important, because there are column-oriented table stores in MySQL and other RDBMS systems. The key problem is sharding, which is what MySQL NDB Cluster (a primitive key-value store) and other distributed-DB technologies focus on, and it's where they beat traditional DBs at scalability.

BUT the fundamental problem page-search deals with is that I want a keyword to map to a page-view list (along with metadata such as first paragraph / icon / etc.) that is POPULATED from statistical analysis of ALL page-centric data. Meaning you have two [shardable] primary keys: one is a keyword and one is a web-page URL. But the web-page table essentially has foreign keys into potentially THOUSANDS of keyword records, and vice versa. Thus a single web-page update could require thousands of locks.

In map-reduce, we avoid the problem. We start off with page text, mapped to keywords with some initial metadata about the parent page. In the reduce phase, we consolidate (via a merge-sort) down to just the keywords, grouping the web pages into ever more complete lists (ranked by their original metadata, which includes co-keywords). In the end, you have a maximally compact index file, which you can replicate to the world using traditional BigTable (or even big iron if you really wanted).
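
In miniature, that map/merge-sort/reduce shape looks something like the following sketch (a real job shards each phase across thousands of machines; this just shows the data movement):

<ecode>
from itertools import groupby
from operator import itemgetter

PAGES = {"example.org/a": "colossus index", "example.org/b": "colossus gfs"}

# Map phase: each page emits (keyword, page) pairs, independently shardable.
mapped = [(word, url) for url, text in PAGES.items() for word in text.split()]

# Reduce phase: sort by keyword and consolidate into one list per keyword,
# the merge-sort-style consolidation described above.
mapped.sort(key=itemgetter(0))
index = {kw: sorted(url for _, url in group)
         for kw, group in groupby(mapped, key=itemgetter(0))}

print(index["colossus"])   # ['example.org/a', 'example.org/b']
</ecode>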

The problem, of course, was that you couldn't complete the reduce phase until all web pages were fully downloaded and scanned. ALL web pages. So you run an hourly job which takes only high-valued web pages and merges them into the previous master list, giving you essentially static pre-processed data overwritten by a subset of fresh data. But you still have slowest-web-page syndrome. OK, solve that by ignoring page loads that don't complete in time; they'll be picked up in the next update round. Well, you still have the issue of massive web pages that take a long time to process. OK, so we'll have a cut-off for them too: mapping nodes which take too long don't get included this round (you're merging against your last valid value, so if there isn't a newer version, the old one naturally survives). But the merge-sort itself is still MASSIVELY slow. You can't get 2-second turn-around on high-importance web sites. You're still building a COMPLETE index every time.

So now, with a 'specialized' GFS2 and a specialized BigTable, either or both with new-fangled 'triggers', we have the tools (presumably) to do real-time updates. A page load updates its row's metadata in the page table. The system sees the page went up in ranking, so it triggers calls to modify the associated keyword rows (a thousand of them). Those keywords have some sort of batch delay (of, say, 2 seconds) to minimize the number of pushes to the production read servers. So now we have an event-queue processor on the keyword table. This is a batch processor, BUT we don't necessarily have to drain the queue before pushing to production; we only accept as many requests as fit into a 2-second time slice. Presumably the algorithm scales to multiple machines, so some monitor can detect which keys can be grouped together on common hardware and which need more than one server (e.g. a single primary key served by, say, 100 machines; say the keyword 'Lindsay Lohan').
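
A sketch of that batch-delay idea: per-keyword updates are queued, duplicates within a time slice coalesce, and a periodic flush pushes one batch to the read servers. The class and the 2-second cadence are this post's speculation, not anything published:

<ecode>
import collections

class CoalescingQueue:
    """Buffer per-keyword updates and flush them in periodic batches."""
    def __init__(self):
        self.pending = collections.defaultdict(set)

    def enqueue(self, keyword, url):
        self.pending[keyword].add(url)   # duplicates within a slice coalesce

    def flush(self, read_index):
        # Imagine a timer calling this every ~2 seconds: one push per
        # keyword per slice, however many updates arrived in between.
        for keyword, urls in self.pending.items():
            read_index.setdefault(keyword, set()).update(urls)
        self.pending.clear()

queue, read_index = CoalescingQueue(), {}
queue.enqueue("lohan", "example.org/x")
queue.enqueue("lohan", "example.org/x")   # coalesced: only one push results
queue.flush(read_index)
print(read_index["lohan"])                # {'example.org/x'}
</ecode>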

In terms of enhancements since the last BigTable white paper, triggers obviously make sense. On update, when a special filter condition is met, trigger a remote call to another table to incorporate a subset of the updated row. So for URL X, whenever a new URL+keyword primary key is inserted, immediately push the URL metadata to that keyword's row in the keyword table. Do something similar if some interesting aspect of the keyword has changed, OR if the base metadata for the overall page has changed (say some premium service or search data ranks the page higher overall, so all the URL's keywords need to be reconsidered).

The other aspect could be making the incremental index pushes more efficient, from the writable keyword table to the read-only (ideally compacted) keyword tables that serve all the Google searches. With map-reduce, those would always have been slack-free and redundancy-free. With BigTable, you'll likely have tremendous slack space and overwritten nodes. Plus you won't have an efficient search depth (not log(n), but k + log(n), which can potentially double or triple the number of GFS[2] loads).

So ideally you'd like to run a compaction on the data after your 2-second queued update. If you synchronized the operation, then on the 0th millisecond of every even second you'd initiate a compaction and publication. This differs from traditional BigTable, which runs compaction as a background process that is largely transparent to operation. I can think of several ways to minimize the cost of the compaction and thus the number of SSTables that would have to be inserted into the public view's mapping. But I'm sure this is a complex endeavor that required a lot of debugging.

There is another... (2, Funny)

bosef1 (208943) | about 4 years ago | (#33550330)

So does that mean Microsoft is developing a competing distributed computing system called "Guardian"? And how does that possibly seem like a good idea?

Re:There is another... (1)

fyngyrz (762201) | about 4 years ago | (#33551176)

Ooooh.... ten SF points to you for the D.F. Jones reference.

Re:There is another... (1)

AuMatar (183847) | about 4 years ago | (#33553262)

No, it's called "Ultralisk". And a very fitting species indeed.

Re:There is another... (0)

Anonymous Coward | about 4 years ago | (#33558518)

That would explain the creep that Steve Ballmer leaves behind.

Awesome choice of name. (5, Funny)

Scytheford (958819) | about 4 years ago | (#33550332)

"This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. [...] We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple."
-Colossus.

Source: http://www.imdb.com/title/tt0064177/ [imdb.com]

Re:Awesome choice of name. (1)

Waffle Iron (339739) | about 4 years ago | (#33550516)

It's been a very long time since I saw that movie, but one key thing sticks in my mind: That computer was the ultimate asshole.

Re:Awesome choice of name. (1)

fyngyrz (762201) | about 4 years ago | (#33551182)

Read the books. The movie, as usual, was but a pale imitation.

Re:Awesome choice of name. (1)

Ephemeriis (315124) | about 4 years ago | (#33552224)

I guess I shouldn't be surprised, but I didn't realize there were any books...

Re:Awesome choice of name. (1)

fyngyrz (762201) | about 4 years ago | (#33553688)

Author: D. F. Jones
Book 1: Colossus
Book 2: The Fall of Colossus
Book 3: Colossus and the Crab

Read the sequel too... Awesome choice of name. (1)

lenski (96498) | about 4 years ago | (#33552972)

The sequel was, in my opinion, as interesting as the original novel. Jones delved into some uncomfortable social (to me) territory, then finished up with a nice Faustian twist. (Damn, I read the *sequel* 35 years ago.... where DOES the time go?)

Re:Read the sequel too... Awesome choice of name. (1)

fyngyrz (762201) | about 4 years ago | (#33553708)

"The" sequel? It's a trilogy... :)

I think, especially given the time frame (1966 and forward), that it's some of the best writing of its kind. The writing is a bit dated now, unsurprisingly I suppose, but I think it's fair to say that it deserves a place in any serious reader's collection.

Re:Awesome choice of name. (0, Offtopic)

jimmydevice (699057) | about 4 years ago | (#33550714)

Colossus may have been an asshole, but not as big an asshole as our current political and (even more importantly) corporate leaders.

Re:Awesome choice of name. (4, Informative)

Anonymous Coward | about 4 years ago | (#33551316)

Colossus is also the name of the computers Bletchley Park used to crack the German Lorenz cipher.
http://en.wikipedia.org/wiki/Colossus_computer [wikipedia.org]

...THERE IS ANOTHER SYSTEM... (0)

Anonymous Coward | about 4 years ago | (#33551730)

n/t

Re:Awesome choice of name. (1)

Ephemeriis (315124) | about 4 years ago | (#33552220)

Good to see I'm not the only one who thought that as soon as I saw the name...

as long as the product manager's name isn't forbin (1)

limber (545551) | about 4 years ago | (#33550334)

Colossus? That sounds ominous.

I have to say... (5, Funny)

tpstigers (1075021) | about 4 years ago | (#33550404)

I am so glad Google has moved away from the Argus platform and into the Mercedes system. It makes it so much easier for those of us who are used to programming in Gibberish. Don't get me wrong - the days of Jabberwocky code were brilliant, but it's high time we moved into the Century of the Fruitbat.

Re:I have to say... (1)

The End Of Days (1243248) | about 4 years ago | (#33550526)

I'll personally need to be dragged kicking and screaming into the Century of the Fruitbat. Or out of it, as the case may be.

Re:I have to say... (1)

bananaquackmoo (1204116) | about 4 years ago | (#33550730)

Hopefully that fruitbat is named Eric

Re:I have to say... (2, Funny)

martin-boundary (547041) | about 4 years ago | (#33550740)

No. Eric's only a half-a-fruitbat.

Re:I have to say... (1)

blair1q (305137) | about 4 years ago | (#33567904)

That explains it. I already knew he was only half-a-bee...

Well (1)

Rocky (56404) | about 4 years ago | (#33550522)

...is this a fancy way of saying it's a transactional system? Just say it, then!

Re:Well (2, Informative)

kurokame (1764228) | about 4 years ago | (#33550896)

No, the old system was transactional as well. The problem was that it was transactional across a very large number of operations being run in parallel, and any failure could cause the entire transaction to fail. The new system is incremental rather than monolithic. While it may not be quite as fast across a large number of transactions, it doesn't risk major processing losses either. Such failures are very unlikely, but the Google index has grown large enough that it is probably running into unlikely problems all the time.

MapReduce is also staged, and the first stage must complete before the second can start. At Google's scales, this adds up to quite a lot of wasted power.

Processing a batch of data with Colossus is probably slower than using MapReduce under ideal circumstances. But failures don't incur a major penalty under Colossus, and MapReduce ties up CPU cycles with waits which aren't wasted under Colossus. Even if Colossus is slower under ideal circumstances, it's more reliable and more efficient in practice.
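
The "unlikely problems all the time" point is just arithmetic: if each of n independent operations fails with probability p, the chance that at least one fails is 1 - (1 - p)^n, which climbs toward certainty as n grows. A quick check with made-up numbers:

<ecode>
# P(at least one failure) = 1 - (1 - p)^n for n independent operations.
p = 1e-9                      # a one-in-a-billion failure per operation
for n in (10**6, 10**9, 10**10):
    print(n, 1 - (1 - p)**n)  # ~0.001, ~0.63, ~0.99995
</ecode>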

Re:Well (0)

Anonymous Coward | about 4 years ago | (#33551154)

"probably running into unlikely problems all the time."

If you're running into them all the time in all likelihood, aren't they no longer unlikely?

Oh shit, my meta is on the other line, can I call you back?

Re:Well (2, Insightful)

kurokame (1764228) | about 4 years ago | (#33551370)

Statistics: making the unlikely happen every day if you roll the dice enough times.

Re:Well (1)

mhelander (1307061) | about 4 years ago | (#33553000)

...per day. Otherwise, if you only roll the dice a few times per day, the unlikely will only happen once in a blue moon.

Re:Well (1)

SnowZero (92219) | about 4 years ago | (#33558532)

Google has a lot of dice.

Re:Well (1)

DragonWriter (970822) | about 4 years ago | (#33580182)

No, the old system was transactional as well.

As I read the description, the old system wasn't really transactional as the term is normally used; it rebuilt the index (at least, the index for each layer) from scratch each iteration rather than making transactional updates to an existing index.

Processing a batch of data with Colossus is probably slower than using MapReduce under ideal circumstances.

From the description, I'm not sure that the new system is ever faster at processing a (similar) batch of data than the old one, or that the speed with which a batch of data is processed is really the key issue. The most significant change seems to be that they are processing smaller batches of data, reducing the time between crawling a page and updating the index. This delivers value faster (less delay between crawling a page and updating the index) whether or not it reduces the total time it takes to process a given volume of data. It doesn't need to ever (under either ideal or real world conditions) be faster in terms of volume of data processed per unit of time to be a win in terms of providing fresher data, which is what matters here.

Re:Well (3, Informative)

TheRaven64 (641858) | about 4 years ago | (#33551674)

Yes and no. With MapReduce, they were hitting Amdahl's Law. The speed limit of any concurrent system is defined by the speed of the slowest serial component. This is why IBM still makes money selling very fast POWER CPUs, when you can get the same speed on paper from a couple of much cheaper chips.
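
Amdahl's Law in numbers: with even a small serial fraction, piling on machines stops helping. A one-liner with made-up figures:

<ecode>
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p.
p, n = 0.95, 1000                 # 5% serial work, a thousand nodes
print(1 / ((1 - p) + p / n))      # ~19.6x: the serial 5% caps the speedup
</ecode>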

The old algorithm (massive oversimplifications follow) worked by indexing a small part of the web on each node, building a small index, and then combining them all in the last step. Think of a concurrent mergesort or quicksort - the design was (very broadly) similar.

The problem with this was that the final step was the one that updated the index. If one of the nodes failed and needed restarting, or was slow due to the CPU fan failing and the processor down-clocking itself, the entire job was delayed. The final step was largely serial (although it was actually done as a series of hierarchical merges) so this also suffered from scalability problems.

The new approach runs the partial indexing steps independently. Rather than having a separate step to merge them all, each one is responsible for merging itself into the database. This means that if indexing slashdot.org takes longer than expected then this just delays updates for slashdot.org, it doesn't delay the entire index update.
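
Roughly, the difference looks like the sketch below: each worker folds its own partial index into the shared database as soon as it finishes, so a slow site delays only itself rather than a global merge step (the locking and names are illustrative, not the real system):

<ecode>
import threading

index, lock = {}, threading.Lock()

def index_site(site, pages):
    """Each worker merges its own results; a slow site delays only itself."""
    partial = {}
    for url, text in pages.items():
        for word in text.split():
            partial.setdefault(word, set()).add(url)
    with lock:                       # no global barrier, just a short merge
        for word, urls in partial.items():
            index.setdefault(word, set()).update(urls)

sites = {
    "slashdot.org": {"slashdot.org/1": "caffeine colossus"},
    "example.org": {"example.org/1": "colossus"},
}
workers = [threading.Thread(target=index_site, args=(s, p))
           for s, p in sites.items()]
for w in workers: w.start()
for w in workers: w.join()
print(sorted(index["colossus"]))   # ['example.org/1', 'slashdot.org/1']
</ecode>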

The jab at Microsoft in the El Reg article is particularly funny, because Google is now moving from a programming model created at MIT's AI labs to one very similar to the model created at Microsoft Research's Cambridge lab, in collaboration with Glasgow University.

Re:Well (0)

Anonymous Coward | about 4 years ago | (#33560930)

"indexing" is jokingly easy to parallelize---so no matter what approach they use, it's an easy problem to solve. I'm more curious about how they parallelize the PageRank algorithm---are they still running that? How do they do it "on the fly" as described in the article---how would they incrementally add 1 page to the "index" with correct pagerank?

Summarizing...summarizing... (3, Interesting)

kurokame (1764228) | about 4 years ago | (#33550850)

Colossus is incremental, whereas MapReduce is batch-based.

In MapReduce, you run code against each item with each operation spread across N processors, then you reduce it using a second set of code. You have to wait for the first stage to finish before running the second stage. The second stage is itself broken up into a number of discrete operations and tends to be restricted to summing results of the first stage together, and the return profile of the overall result needs to be the same as that for a single reduce operation. This is really great for applications which can be broken up in this fashion, but there are disadvantages as well.

MapReduce is a sequence of batch operations, and generally, Lipkovitz explains, you can't start your next phase of operations until you finish the first. It suffers from "stragglers," he says. If you want to build a system that's based on a series of map-reduces, there's a certain probability that something will go wrong, and this gets larger as you increase the number of operations. "You can't do anything that takes a relatively short amount of time," Lipkovitz says, "so we got rid of it."

The problem for Google is that the disadvantages scale. The fact that you have to wait for all operations from the first stage to finish and that you have to wait for the whole thing to run before you find out if something broke can have a very high cost at high item counts (noting that MapReduce typically runs against millions of items or more, so "high" is very high). With the present size, it's apparently more advantageous to get changes committed successfully the first time, even if MapReduce might be able to compute the result faster under ideal circumstances.

For example, why do you use ECC memory in a server? Because you have a bloody lot of memory across a bloody lot of computers running a bloody lot of operations, and failures potentially have more serious consequences than in a program on someone's desktop. At higher scales, non-ideal circumstances are more common and have more serious consequences. So while they still use MapReduce for some functions where it's appropriate, it's no longer appropriate for maintaining the search index. It's just gotten too big.
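
The "wait for the first stage" constraint in miniature: in the sketch below, the reduce step cannot begin until the slowest map task returns, so one straggler stalls the whole batch (the sleep times just stand in for slow or flaky nodes):

<ecode>
import time
from concurrent.futures import ThreadPoolExecutor

def map_task(seconds):
    time.sleep(seconds)            # a straggler is just a big number here
    return seconds

with ThreadPoolExecutor() as pool:
    start = time.monotonic()
    results = list(pool.map(map_task, [0.01, 0.01, 0.5]))  # one straggler
    # The barrier: reduce only runs once *every* map task has finished.
    total = sum(results)
    print(f"sum={total}, reduce started {time.monotonic() - start:.2f}s in")
</ecode>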

Re:Summarizing...summarizing... (-1, Offtopic)

Anonymous Coward | about 4 years ago | (#33552142)

If your server is so bloody, maybe it's on its rag. I think it needs a tampon.

Colossus was yesterday. (0)

Trieuvan (789695) | about 4 years ago | (#33550924)

I bet they are already working on the next version. Caffeine was deployed a year ago.

BigTable paper (1)

1 a bee (817783) | about 4 years ago | (#33551170)

I googled around for more information on this Caffeine architecture. The best I could come up with was a paper on BigTable [googleusercontent.com], purported in news articles to be the basis of Caffeine.

Re:BigTable paper (1, Insightful)

Anonymous Coward | about 4 years ago | (#33551782)

A paper about it will be published at OSDI '10 in October.

It is quick (1)

MichaelSmith (789609) | about 4 years ago | (#33551332)

Recently I googled the subject of a slashdot article I was reading. The /. article was the third result from google. So how does google know a new article is up? Is there a special interface for that?

Re:It is quick (1)

TempeTerra (83076) | about 4 years ago | (#33551572)

Off the top of my head: there's often an XML sitemap which Google hits frequently to see which pages have changed. I can't see one linked anywhere on Slashdot, but I think you can create one and submit it to Google yourself.

There is also a robots.txt [slashdot.org] file which allows crawlers to fetch a page every 100 seconds - I wouldn't be surprised if google crawls the slashdot frontpage for new articles every 200 seconds or so.
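
For the curious, that delay is machine-readable; Python's standard library can pull it straight out of robots.txt. Whether Slashdot still publishes a Crawl-delay, and what value it holds, is not guaranteed:

<ecode>
from urllib import robotparser

rp = robotparser.RobotFileParser("http://slashdot.org/robots.txt")
rp.read()                              # fetches and parses robots.txt
print(rp.crawl_delay("*"))             # e.g. 100 (seconds), or None
print(rp.can_fetch("*", "http://slashdot.org/"))
</ecode>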

Another option is that google might have subscribed to the slashdot RSS feed - it's also extremely indexable. I don't know what the latency would be like on RSS.

Re:It is quick (0)

Anonymous Coward | about 4 years ago | (#33551970)

Yes, http://rss.slashdot.org/Slashdot/slashdot

Re:It is quick (1)

MichaelSmith (789609) | about 4 years ago | (#33556094)

I doubt it. That feed is many minutes behind the main page.

Re:It is quick (2, Interesting)

Surt (22457) | about 4 years ago | (#33552588)

I assume google polls sites, polling faster every time it finds a change and slower every time it doesn't. Eventually it settles into a wobble around the probable update rate of the site. Otherwise they'd have to trust sites to call their API with updates, and that would let any search engine which DID employ a wobbly-poll strategy beat them on freshness.
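
That wobbly-poll strategy fits in a few lines: speed up after a change, back off after a miss, and clamp to sane bounds. The multipliers and bounds here are arbitrary guesses:

<ecode>
def next_interval(interval, changed, lo=60.0, hi=86400.0):
    """Poll faster after a change, slower after a miss (a 'wobbly' poll)."""
    interval = interval * 0.5 if changed else interval * 1.5
    return max(lo, min(hi, interval))

interval = 3600.0
for changed in (True, True, False, False, True):
    interval = next_interval(interval, changed)
    print(round(interval))   # settles into a wobble around the update rate
</ecode>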

Mod Offtopic, please (2, Interesting)

Khyber (864651) | about 4 years ago | (#33551500)

This is going to give my Camfrog name a new meaning, as I *LOVE* screwing around with file systems. Colossus Hunter, indeed!

Bing! (0)

Anonymous Coward | about 4 years ago | (#33551898)

Charlie, say it! SAY IT Charlie!
