
Google Running 900,000 Servers

CmdrTaco posted more than 3 years ago | from the yeah-but-how-many-cores-chris dept.

Google

1sockchuck writes "How many servers is Google using? The company won't say, but a new report places the number at about 900,000. The estimate is based on data Google shared with researcher Jonathan Koomey, for a new report on data center power use. The data updates a 2007 report to Congress, and includes a surprise: data centers are using less energy than projected, largely due to the impact of the recession (buying fewer servers) and virtualization."


127 comments


FUCK Google (-1)

Anonymous Coward | more than 3 years ago | (#36947658)

FuCk fUcK GoOgLe and their spies

Re:FUCK Google (1, Informative)

Lysander7 (2085382) | more than 3 years ago | (#36947736)

I believe you dropped your tin foil hat.

Re:FUCK Google (0)

obergfellja (947995) | more than 3 years ago | (#36947750)

Here is the viagra... you'll need it if you are going to FuCk GoOgLe that much. lol

Re:FUCK Google (0)

thetoadwarrior (1268702) | more than 3 years ago | (#36949558)

You'll need to try harder, Ballmer.

Nice (-1)

Anonymous Coward | more than 3 years ago | (#36947660)

Nice.

Firts Post! (-1)

Anonymous Coward | more than 3 years ago | (#36947672)

First Post

640k (0, Funny)

Anonymous Coward | more than 3 years ago | (#36947684)

640k ought to be enough for anybody.

How about lower wattage CPUs? (4, Informative)

Enry (630) | more than 3 years ago | (#36947686)

We've moved from 1U systems with 90-125W CPUs to blade enclosures with 60W CPUs, and we're also getting 4 or 6 cores per physical CPU rather than 1 or 2. While our HPC cluster core count has increased by a factor of 4 (allowing researchers to do more work), the amount of energy and floor space required did not increase much at all.

Re:How about lower wattage CPUs? (1)

Skapare (16644) | more than 3 years ago | (#36947738)

ARM for even lower wattage.

Re:How about lower wattage CPUs? (0)

Anonymous Coward | more than 3 years ago | (#36947836)

But longer wait-times for it to finish crunching numbers ;-)

Re:How about lower wattage CPUs? (2)

alen (225700) | more than 3 years ago | (#36947856)

And ARM is hamstrung in that it lacks a lot of the x86/x64 features that make those chips run so fast by comparison.

The biggest advantage of ARM is that the SoC includes the GPU and the RAM, but that edge is going away: x64 now ships with the GPU on board for most CPUs.

Re:How about lower wattage CPUs? (1)

poetmatt (793785) | more than 3 years ago | (#36948014)

Uh, I believe we're talking about server situations, not consumers. Having a GPU on an ARM chip or on an x64/x86 chip is a non sequitur. Or did I miss something here? I fail to see where you come up with this, considering even Intel is trying to make an ARM chip. [tomshardware.com] Do you think they're doing it because ARM supposedly doesn't run as well, or because of GPUs on the chip? Hint: Intel is shitting their pants over ARM right now.

Re:How about lower wattage CPUs? (1)

tibit (1762298) | more than 3 years ago | (#36948160)

They should only worry about the enterprise/server market when there's a full-featured JVM for ARM. So far, there isn't one.

Re:How about lower wattage CPUs? (2)

grimmjeeper (2301232) | more than 3 years ago | (#36948300)

GPUs provide substantially faster floating point processing than a general purpose CPU. Putting these new Intel/AMD integrated chips in big iron supercomputers will give research teams orders of magnitude more computing power (for less power and money) than the current CPU-only based offerings.

As far as Intel trying to make an ARM chip, that's for an entirely different market.

Re:How about lower wattage CPUs? (1)

poetmatt (793785) | more than 2 years ago | (#36951648)

Putting an integrated graphics chip onto a server CPU is a joke. Even if you put 32 integrated GPUs onto a single 8-core server CPU, the flops will be shit in comparison to any discrete graphics card, which costs orders of magnitude less.

Re:How about lower wattage CPUs? (1)

Desler (1608317) | more than 3 years ago | (#36949030)

You do realize that a lot of supercomputing clusters use GPUs for their number crunching, right? So how exactly is it a non sequitur?

Re:How about lower wattage CPUs? (1)

poetmatt (793785) | more than 2 years ago | (#36951630)

GPU clusters and GPUs embedded into processors are not the same thing. Desktops might have iGPUs, but server processors don't. Servers in GPU clusters commonly use discrete graphics cards. Why do people conflate the two? Do people not understand that on an 8-core chip, a single integrated GPU (let alone one per core) will still perform like shit compared to even a cheap discrete graphics card?

Re:How about lower wattage CPUs? (0)

Anonymous Coward | more than 2 years ago | (#36951574)

Your link is to what originated as an unsubstantiated trade rag filler piece from well over a year ago that had no real backing from Intel, and I believe there was some countering piece even written. Intel wouldn't have to "try" to make ARM chips. They made ARM chips for years and sold the business to Marvell voluntarily. Some people claimed they were realizing their "mistake"... but there is no good evidence that Intel is changing their original strategy.

Notice that well over a year has passed and I have yet to hear of the follow-up to your bullshit link despite this nugget: "Wednesday that servers based on ARM multicore processors should arrive within the next twelve months."

Just like IBM and BlueGene and PPC cores, there will be a niche for ARM in the server arena, but I don't think Intel is shitting anything over it, and you need to find something more exciting than EETimes references from last year.

Re:How about lower wattage CPUs? (1)

elsurexiste (1758620) | more than 3 years ago | (#36947752)

While our HPC cluster core count has increased by a factor of 4 (allowing researchers to do more work), the amount of energy and floor space required did not increase that much at all.

How much did it increase, then? Just curious about the efficiency...

Re:How about lower wattage CPUs? (2)

Enry (630) | more than 3 years ago | (#36947928)

Hard to say. We were already moving to blade servers when we started the expansion. With the previous chassis servers and their 90W CPUs, we'd have needed a rack rated at 30KW rather than the standard 20. With the low-power CPUs, we easily fit in a 20KW rack. Our data center folks (who really know the numbers) started to panic when we had a 20KW rack half full of 1U systems.
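As a back-of-the-envelope sketch of why those racks fill up so fast (the per-server draw here is an assumed round number, not our measured figure):

    servers=42            # a rack packed with 1U systems
    watts_per_server=500  # assumed draw per 1U box under load
    echo "$(( servers * watts_per_server / 1000 )) kW"   # ~21 kW, already past a standard 20KW rack

Even a half-full rack of hot 1U boxes gets uncomfortably close to the limit, which is why the low-power blades make the data center folks much happier.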

Re:How about lower wattage CPUs? (3, Informative)

Enderandrew (866215) | more than 3 years ago | (#36948032)

In addition to this, Google runs DC power supplies with a low-voltage on-board battery in each server, rather than large rack UPSes. I've heard they have some innovative tricks for server room cooling as well, but I've never seen confirmation of exactly what they're doing. Google goes to great lengths to cut down data center power usage.

Lower Wattage: Google may be test-driving Tilera (2)

1sockchuck (826398) | more than 3 years ago | (#36948260)

There are reports that Google has been testing low-power, many-core servers [semiaccurate.com] from Tilera and Quanta. Facebook is also test-driving Tilera chips [datacenterknowledge.com] and seeing promising results when using them for key-value apps like memcached. When you have 900,000 servers, you get plenty of attention from processor and server vendors.

Re:Lower Wattage: Google may be test-driving Tiler (1)

Enry (630) | more than 3 years ago | (#36948418)

Not all 900k servers are being used for memcached. You will need higher speed CPUs for crawling, operating all the back-end Google services, transcription, etc.

Re:Lower Wattage: Google may be test-driving Tiler (1)

JamesP (688957) | more than 3 years ago | (#36950672)

I'm guessing that for I/O-bound servers (that is, all those that handle storage) a fast CPU is a waste (unless they're also running MapReduce).

Since most modern CPUs can 'go around the world' while the HD is fetching data, that kind of makes sense.

Of course, the cost/benefit analysis involves more than just this.

my opinion (-1)

Anonymous Coward | more than 3 years ago | (#36947740)

dubz get!

900,000 servers... (1)

FalafelXXX (598968) | more than 3 years ago | (#36947782)

900,000 servers, and they are all data-mining the internet for porn. Awesome.

Re:900,000 servers... (1)

trum4n (982031) | more than 3 years ago | (#36947948)

Yet Google Images sucks harder than ever.

Re:900,000 servers... (1)

jcwayne (995747) | more than 3 years ago | (#36948202)

That's the idea.

Re:900,000 servers... (0)

Anonymous Coward | more than 3 years ago | (#36947962)

That's what the Internet is for, after all.

Re:900,000 servers... (2)

Hatta (162192) | more than 3 years ago | (#36948810)

What?! 900,000?! There is no way that could be right.

Re:900,000 servers... (1)

gorzek (647352) | more than 3 years ago | (#36949370)

Did you expect it to be...

...OVER 900,000?

garbage (2)

StripedCow (776465) | more than 3 years ago | (#36947788)

With the current pace of technology, those machines will be outdated in a few years.

Imagine the pile of garbage that will create...

Re:garbage (1)

jedidiah (1196) | more than 3 years ago | (#36947932)

Send it to the local PC recycler.

We have one of those in rural Texas. Surely there are some in the same neck of the woods where Google operates.

Re:garbage (2)

Pieroxy (222434) | more than 3 years ago | (#36948120)

Send it to the local PC recycler.

Should we send the 900,000 units in one shipment?

Re:garbage (1)

BetaDays (2355424) | more than 3 years ago | (#36949032)

Sure, why not? It's cheaper to send in bulk.

Re:garbage (1)

Pharmboy (216950) | more than 3 years ago | (#36949334)

Depends. Did they replace all 900,000 in one day? No? Didn't think so.

Re:garbage (1)

cpghost (719344) | more than 3 years ago | (#36950038)

Send it to the local PC recycler.

It's actually a shame that perfectly working machines are being destroyed this way... while in Asia, Africa, etc., people (schools, for example) would be more than happy to use those machines for 5 to 10 more years, at least.

Re:garbage (1)

TheRaven64 (641858) | more than 3 years ago | (#36950330)

Schools in the USA and Europe would be grateful for them too! Businesses often throw out machines as part of a 3-year rolling upgrade cycle, while schools are stuck with machines 5+ years old because they don't have budget for new ones.

Re:garbage (0)

Anonymous Coward | more than 3 years ago | (#36950400)

Do you know what a Google server looks like (hint: no case, a rack-mounted device, possibly running on pure DC)? I doubt a Google server would be of much use to any consumer. I suppose they could use it as a file server or whatnot if it took AC and they built a cover for it.

Re:garbage (1)

georgesdev (1987622) | more than 2 years ago | (#36951384)

Sure, but what do citizens like you and me do about it? Close to nothing, I'm afraid. Aren't there about a billion PCs operating worldwide today? What do we do to recycle a few hundred million of them each year?

Let me fix that for you (1)

bigredradio (631970) | more than 3 years ago | (#36947934)

With the current pace of technology, those machines will be outdated in a few months.

Imagine the pile of garbage that will create...

Re:Let me fix that for you (1)

jdgeorge (18767) | more than 3 years ago | (#36948516)

Maybe more to the point, they'll be unable to fulfill their mission for Google within a few years. They'll be good enough for a while, though.

Re:Let me fix that for you (1)

abarrow (117740) | more than 3 years ago | (#36950504)

So why is that, exactly? Google has proven by its actions that their solution to the need for more processor power is just to add more servers. Granted, it'd be tough to pull one of the old blades and play the most recent edition of Duke Nukem, but really, these systems will still be able to crunch numbers for a very long time. Would buying faster systems with more cores allow a single system to crunch more? Sure, but those old systems can still happily serve their original purpose. As Google needs more systems, they buy the latest and fastest, but the old systems don't have to be replaced until they actually break.

Re:garbage (1)

captainpanic (1173915) | more than 3 years ago | (#36947980)

I expect quite a high percentage of that to be recycled, actually.

Re:garbage (1)

franciscohs (1003004) | more than 3 years ago | (#36948718)

Not sure where you work, but in all the places I've worked, hardware was stockpiled once it was put out of service. I'm not sure of the reason, but there seem to be a lot of accounting issues for a company that wants to get rid of stuff. Maybe someone more knowledgeable can comment on that.

Re:garbage (1)

the_B0fh (208483) | more than 3 years ago | (#36950610)

Depreciation. Also, it's an "asset", and an "asset" can't just go missing.

Re:garbage (0)

Anonymous Coward | more than 3 years ago | (#36948038)

With the current pace of technology, those machines will be outdated in a few years.

Imagine the pile of garbage that will create...

One would hope they're upgradable. Throw out a few CPUs, maybe some memory, keep the perfectly good boards, chassis and PSUs.

Re:garbage (1)

GodfatherofSoul (174979) | more than 3 years ago | (#36948126)

No, people like us will buy them for our home networks. Low-end and low-income users will happily use them. Old machines have *a lot* of life left, even if they're no longer on the front lines.

Re:garbage (1)

onepoint (301486) | more than 3 years ago | (#36948344)

I was able to recycle a Dell Inspiron 8500 back in 2005. It just died on Friday; after 6 years of working well, the drive finally crashed. I figured I would fix it and give it to a non-profit that might be able to use it.

Re:garbage (2)

swb (14022) | more than 3 years ago | (#36948514)

A lot of non-profits won't take donated systems anymore because it's a nuisance to deal with so many antiquated systems. Non-profits need working and reasonably contemporary systems to do their work, a bunch of 256 meg Win 98 systems is really more of an insult than a benefit.

Re:garbage (1)

onepoint (301486) | more than 3 years ago | (#36949302)

Really? Then I guess I'll fix it up for myself and use it as a travel computer. Thank you for advising me.

Re:garbage (1)

Desler (1608317) | more than 3 years ago | (#36949096)

Low-end and low-income users will happily use them.

No, they would more likely rather have something recent that actually works well than some 12-year-old dumpster-dived computer that runs as slow as molasses.

Re:garbage (1)

LWATCDR (28044) | more than 3 years ago | (#36948156)

Well, that will depend on a lot of things. If a socket upgrade is available, they can just put in a new CPU. Even if the "server" is taken out of service, you are just talking about the mainboard, CPU, and memory. The CPU and memory might be offered for sale as "used" or "refurbished". The board will be recycled; the power supply and the rest will probably be reused unless a more efficient option is available. Servers tend to have a longer life than, say, cell phones and desktop PCs.
 

Re:garbage (2)

Sitrix (973862) | more than 3 years ago | (#36948470)

I am quite sure that not all of those are what we call "physical" servers. It's most likely a cluster of beefy hardware running a ton of VMs. As that hardware becomes obsolete, engineers will run fewer VMs on it and later move it out of the main production environment to handle less stressful tasks. It's common now to see a handful of servers (48 cores, 512GB RAM) running a few hundred virtual servers, so it will take a long time before that hardware is completely thrown away...
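For a rough sense of the consolidation ratios involved (the per-VM sizes below are assumed for illustration, not taken from any particular deployment):

    host_ram_gb=512; host_cores=48
    vm_ram_gb=2;     vm_vcpus=1
    echo "RAM-bound:  $(( host_ram_gb / vm_ram_gb )) VMs per host"                      # 256
    echo "CPU-bound:  $(( host_cores * 4 / vm_vcpus )) VMs per host at 4:1 overcommit"  # 192

Either way you land in the "few hundred VMs per box" range, so retiring a host is more about draining its VMs than throwing away hardware.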

The standard Google server (3, Informative)

Quila (201335) | more than 3 years ago | (#36949838)

Dual-processor, two SATA hard drives, 12V PSU, 12V Lithium battery. It's not even sealed in a case, just a frame holding a board, with the PSU, battery and hard drives held on with Velcro.

Most of these will be about that spec.

Typo (0)

Anonymous Coward | more than 3 years ago | (#36947794)

data centers are using less energy than projected

Re:Typo (0)

Anonymous Coward | more than 3 years ago | (#36948236)

You think the editors would actually proofread and edit the summary before posting it? Hahahahahahahahahaha.

Impressed (3, Funny)

Verdatum (1257828) | more than 3 years ago | (#36947806)

No comment about it being over 9000 yet. I'm impressed, Slashdot.

Re:Impressed (1)

saihung (19097) | more than 3 years ago | (#36947824)

You absolute bastard.

Re:Impressed (1)

Baloroth (2370816) | more than 3 years ago | (#36948004)

I'm not as impressed with that as I am with the fact that no one ha yet pointed out

data centers are using less energy that projected

. "Data centers that projected what?", you might ask, as I did. Nope, its a typo. Or an omission, since data centers certainly could project... but more likely a type.

Re:Impressed (1)

Hooya (518216) | more than 3 years ago | (#36948140)

"but more likely a type" what?, you might ask, as I did. Nope, it's a typo. Or an omission, since you could have a type of grammatical error... but more likely a typo.

Re:Impressed (0)

Anonymous Coward | more than 3 years ago | (#36948198)

... but more likely a type.

I'm not sure if you were expressly being ironic with this, or you just happened to make a typo while complaining about typos and it's normally ironic.

...Or maybe it's just coincidental, like rain on your wedding day-eee-ay.

Re:Impressed (0)

Anonymous Coward | more than 3 years ago | (#36948182)

Please, this is slashdot.
 
Btw, IT'S OVER 9000x10^2!!!1!eleven1!

Re:Impressed (1)

Nidi62 (1525137) | more than 3 years ago | (#36948312)

Don't worry, eventually there will be at least 9000 comments all with some variation of "But that's over 9000!"

Including this one

Is that a Googleplex? (1)

Kreylix (322480) | more than 3 years ago | (#36947818)

Couldn't resist.

Mainframes (2)

instagib (879544) | more than 3 years ago | (#36947854)

I wonder how a few hundred mainframes plus storage arrays would fare in terms of TCO.

Shameless plug (1)

ArhcAngel (247594) | more than 3 years ago | (#36947884)

OT: Google is using all kinds of renewable sources for their energy. [nexteraene...ources.com]
Back OT: Do you think they keep all their servers in mobile homes so they can keep the number of servers a secret?

Re:Shameless plug (2)

Nidi62 (1525137) | more than 3 years ago | (#36948130)

It seems like they always keep what's in their locations a secret. My father was a manager at a distribution center for a fairly large national electrical supply chain, and several times people would come in to buy things for a complex they were building nearby. Apparently they worked for Google (they were always wearing Google shirts), and they were never allowed to say what they were building or what kind of work they would be doing.

The Last Question (2)

roman_mir (125474) | more than 3 years ago | (#36947924)

So does it have enough data to answer the last question meaningfully yet?

Re:The Last Question (1)

Meneth (872868) | more than 3 years ago | (#36947990)

No. There is still more data to collect.

Re:The Last Question (0)

Anonymous Coward | more than 3 years ago | (#36948342)

"There is as yet insufficient data for a meaningful answer"

Re:The Last Question (0)

Anonymous Coward | more than 3 years ago | (#36948848)

Imagine having a rack full of 1U servers, each with 5 small-diameter, high-speed fans pushing air across the CPU/RAM/HDDs (5*42=210). Now take that times 30 racks per row, times however many rows per room... Since you're standing between two long rows, you've got quite a bit of (semi-)white noise constantly hitting your eardrums.

At the DC I worked in, it wasn't as dense or full as Google's, but you still had a slight ringing in your ears after being in there for a while. When we'd come out to the office area afterwards, I'd describe it as a static sound in my ears rather than ringing, and everything seemed especially loud and sharp for a while.

Plus, over-the-ear hearing protection helps keep the ears warm! Also a big plus in some datacenters.

Dan

Wolfram (0)

Anonymous Coward | more than 3 years ago | (#36949230)

Maybe Wolfram has more servers? They seem to be working on the problem anyway... Can entropy be reversed? [wolframalpha.com] I asked Google, but they just directed me to some dumb sci-fi story.

Re:The Last Question (1)

MC68040 (462186) | more than 3 years ago | (#36950266)

Well.. the answer is 42, we already knew that :)

Re:The Last Question (1)

TheRaven64 (641858) | more than 3 years ago | (#36950442)

For not knowing the difference between the Last Question and the Ultimate Question, your geek card is hereby revoked.

Re:The Last Question (1)

AdmiralXyz (1378985) | more than 2 years ago | (#36951356)

Yes [google.com]

Re:The Last Question (1)

roman_mir (125474) | more than 2 years ago | (#36951580)

I agree with this guy. [slashdot.org] You are done.

Real Question (0)

Anonymous Coward | more than 3 years ago | (#36947964)

The real question is how the hell do you manage that many servers? How do you even name them, let alone manage their allocations for IP addressing, maintenance, kernel patches, etc.? They must have some seriously complex management software that allows them various level views into this ginormous server farm in order to deal with it. Once you have that base level of managing the servers themselves, how do you allocate your workloads across that many machines? Shit, no wonder they always introduce a new feature and say things like, "this will be rolling out to all users over the next week". They don't know where the hell they need to replicate the new bits to and how long it will take.

Re:Real Question (0)

Anonymous Coward | more than 3 years ago | (#36948058)

That just might be the reason why they have a small army working in Operations and are always hiring more people...

Re:Real Question (1)

Enderandrew (866215) | more than 3 years ago | (#36948084)

I'm guessing only front-facing web servers get regular security patches. The rest might not get rebooted or patched at all if the servers are replaced frequently enough (every 2-3 years). We are talking Linux servers here.

With that many servers, I'd tie the naming scheme to rack location. IP addressing would go in order along those racks.
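Purely as an illustration of that kind of scheme (the hostname format and address plan below are made up for the example, not anything Google actually uses), a quick bash sketch:

    dc=1
    for rack in 1 2 3; do          # three racks in data center 1
      for slot in $(seq 1 42); do  # 42 1U slots per rack
        printf 'dc%d-r%03d-u%02d  10.%d.%d.%d\n' "$dc" "$rack" "$slot" "$dc" "$rack" "$slot"
      done
    done

The name tells you exactly which building, rack, and slot to walk to, and the IP address falls out of the same coordinates.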

Re:Real Question (1)

instagib (879544) | more than 3 years ago | (#36948086)

Bash scripts.

Re:Real Question (4, Funny)

mat catastrophe (105256) | more than 3 years ago | (#36948214)

The real question is how the hell do you manage that many servers? How do you even name them

1hahaha
2hahaha
3hahaha
4hahaha ....
899999hahaha
900000hahaha

900000 servers! Hahahaha!

Re:Real Question (0)

onepoint (301486) | more than 3 years ago | (#36948374)

Somehow I am thinking of the Muppets and that guy, the Count.

Re:Real Question (1)

mat catastrophe (105256) | more than 3 years ago | (#36948462)

Well, you should be.

Re:Real Question (3, Interesting)

tibit (1762298) | more than 3 years ago | (#36948224)

I don't think that's that big of a problem once you plan for having that many from the get-go. All of those servers must be automatically provisioned, and their names are irrelevant and machine-generated; no one ever needs to know them. Their management software probably manages servers by function. Say they have so many storage nodes, so many storage indexers, so many load balancers, so many static content servers, so many web spiders, etc. The configuration for any particular server must be generated too, from some sort of global configuration for their whole "system".
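A minimal bash sketch of what "manage by function" could look like (the role names and machine counts are invented for illustration; the real tooling would be far more involved):

    # requires bash 4+ for associative arrays
    declare -A role_count=( [storage]=400000 [indexer]=200000 [frontend]=150000 [spider]=50000 )
    for role in "${!role_count[@]}"; do
      echo "render ${role}.conf from the global config and push it to ${role_count[$role]} machines"
    done

Individual hostnames never appear anywhere; the unit of management is the role.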

Re:Real Question (1)

alen (225700) | more than 3 years ago | (#36948284)

The applications do all the work and everything is redundant.

I read years ago that if a server at Google goes down, it may take a month for the data center ops people to get around to replacing it.

And, how many do the Feds have logins on? (1)

GodfatherofSoul (174979) | more than 3 years ago | (#36948102)

n/t

and soon there will be (0)

Anonymous Coward | more than 3 years ago | (#36948170)

over 900,000!

Noisy Room? (1)

seven of five (578993) | more than 3 years ago | (#36948440)

I noticed that the gentleman in the picture is wearing full-size earphones or ear protection. Is the room that noisy, or is he just enjoying some tunes?

Re:Noisy Room? (1)

Bengie (1121981) | more than 3 years ago | (#36949148)

Google had a YouTube video of their security practices. They do actually have hearing protection for the server rooms.

And in their downtime... (0)

Anonymous Coward | more than 3 years ago | (#36948526)

I hear that Google is totally mining Bitcoins.

Virtualization saves energy? (1)

timeOday (582209) | more than 3 years ago | (#36948554)

Virtualization is very inefficient compared to simply running multiple server processes on a single box, because each VM allocates resources to its own instance of the OS, and RAM is more or less statically allocated between them. This makes sense when running several different services that each require a different operating environment, or to enforce complete user separation (e.g. a hosting service). But I would imagine Google is running tens of thousands of identical servers running the same server daemon, so why would virtualization make sense and save energy there?
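To put a rough number on that overhead (the figures below are assumed for illustration, not measured):

    vms=100; guest_os_ram_mb=512   # assumed RAM footprint of each guest OS
    echo "$(( vms * guest_os_ram_mb / 1024 )) GB of RAM spent just on duplicate guest OS instances"
    # the same 100 daemons run as plain processes would share one kernel and one page cache

That 50 GB buys isolation and flexibility, which matters for a hosting provider but not obviously for a fleet of identical daemons.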

Re:Virtualization saves energy? (1)

CBravo (35450) | more than 3 years ago | (#36949924)

Young padawan, there are many sorts of virtualization.

Re:Virtualization saves energy? (1)

dmpot (1708950) | more than 3 years ago | (#36950028)

I would imagine google is running tens of thousands of identical servers running the same server daemon, so why would Virtualization make sense and save energy there?

Who said that Google uses virtualization to run identical servers?

Just running "git log --grep=virtualization" on the Linux kernel, you can see that Google does not contributed much to virtualization in the Linux lernel, in sharp contrast to other part of the kernel such as ext4.

Re:Virtualization saves energy? (0)

Anonymous Coward | more than 3 years ago | (#36950378)

They're probably running a custom, purpose-built abstraction layer that provides the advantages of virtualization without the drawbacks you usually associate with it. I promise they're isolating the crap out of their daemon processes so that they can roll them out in stages without affecting other services, and so they can pop the services up on another server if a datacenter goes offline unexpectedly.

Can you imagine... (0)

Anonymous Coward | more than 3 years ago | (#36949932)

A beowulf cluster of those?

Can you imagine a (1)

future assassin (639396) | more than 3 years ago | (#36950162)

beowulf cluster of these?

Wasteful (0)

Anonymous Coward | more than 2 years ago | (#36951696)

640K should be enough for anybody
