Slashdot: News for Nerds

132 comments

Going to be expensive! (5, Funny)

ikarys (865465) | more than 3 years ago | (#35477390)

It'll likely cost an ARM and a leg.

Cheaper way (1)

eclectro (227083) | more than 3 years ago | (#35477408)

Have a beowulf cluster of cell phones.

Re:Cheaper way (0)

Grindalf (1089511) | more than 3 years ago | (#35477602)

I was going to post that! Darn ...

Re:Cheaper way (1)

Anonymous Coward | more than 3 years ago | (#35477902)

The service contracts or ETF charges would cost way more than the server would.

Re:Cheaper way (4, Funny)

jDeepbeep (913892) | more than 3 years ago | (#35478246)

Nah, too RISCy

Re:Cheaper way (1)

binarylarry (1338699) | more than 3 years ago | (#35478426)

Don't be a CISCy

Re:Cheaper way (0)

Anonymous Coward | more than 3 years ago | (#35479200)

That was not an Inteligent comment.

Re:Cheaper way (1)

binarylarry (1338699) | more than 3 years ago | (#35479356)

Nice.

Re:Cheaper way (0)

Anonymous Coward | more than 3 years ago | (#35479134)

Won't ever be a cluster of cell phones. People who build clusters need floating point performance. x86 floating point performance bites the b*g, but ARM is even worse.

Re:Going to be expensive! (1)

symbolset (646467) | more than 3 years ago | (#35477502)

No.

Re:Going to be expensive! (0)

Anonymous Coward | more than 3 years ago | (#35477518)

I see you've met ARM development kit prices then. The complete opposite of Atmel.

Re:Going to be expensive! (1)

lwsimon (724555) | more than 3 years ago | (#35477644)

Nice. I was thinking "My God... It's full of cores!"

Re:Going to be expensive! (1)

605dave (722736) | more than 3 years ago | (#35478378)

Wish I could mod you up. High-larious.

Re:Going to be expensive! (1)

SimonTheSoundMan (1012395) | more than 3 years ago | (#35477756)

Mmm, reminds me of the prototype card for Acorn computers that had 32 ARM processors clocked at 600MHz. They never released an estimated price though. This was back in the early 2000s, so it would have been incredibly expensive. Cortex A9s are now in mass production, not made in the hundreds/low thousands like Acorn's parts were, so it might be cheaper than you think.

Re:Going to be expensive! (2)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#35477842)

I suspect that cost will largely boil down to the "fabric", type unspecified, and whatever the "because we can" premium for this device happens to be.

Since the A9s are in mass production, and have some vendor competition, they should be reasonably cheap, and of basically knowable price; but, depending on what sort of interconnect this thing has, you could end up paying handsomely for that. "Basically ethernet; but cut down to handle short signal paths over known PCBs" shouldn't be too bad; but if it is some sort of custom NUMA unified memory thing, bend over and open your checkbook...

Sounds like my next workstation (0)

Anonymous Coward | more than 3 years ago | (#35477804)

Having a 'server' moniker only means expensive. This thing ought to be on my desktop. Bring it on.

Re:Sounds like my next workstation (0)

Anonymous Coward | more than 3 years ago | (#35477866)

Right now my system doesn't even have 480 live processes on it, let alone ones contending for execution time.

While a lot of heavy lifting certainly can be parallelized into enough threads to fill available space, I'm skeptical that it'll offer performance commensurate with price on a desktop workload. Now a webserver running 480 cgi processes at a time, that could be pretty spectacular...

Re:Sounds like my next workstation (1, Funny)

Jurily (900488) | more than 3 years ago | (#35478618)

Right now my system doesn't even have 480 live processes on it, let alone ones contending for execution time.

You're obviously not running Gentoo.

Re:Going to be expensive! (1)

chrishillman (852550) | more than 3 years ago | (#35477952)

I am dying.. you have killed me. Way too funny for a Monday morning. Now I am at work literally laughing out loud and I can't explain what is funny to anyone who will get it... I am dead inside, killed by your humorous post...

is it worth it? (2)

metalmaster (1005171) | more than 3 years ago | (#35477424)

When you start piling all you can onto a chip, the power consumption is naturally going to creep up. Once you reach a certain threshold of x chips, you lose the benefit of ARM being "low-power." Am I wrong?

Re:is it worth it? (3, Insightful)

swalve (1980968) | more than 3 years ago | (#35477442)

It's low power in that the cores that aren't being used can (I assume) be shut down, like a switchmode power supply versus a linear one. So you are always using the least amount of power possible.

Re:is it worth it? (1)

SlashV (1069110) | more than 3 years ago | (#35477984)

The analogy with a switchmode power supply is completely b0rked. It doesn't contain any cores. (Furthermore, switching off cores in a multicore server is completely unlike the 'switching' in a switchmode power supply.)

Re:is it worth it? (5, Interesting)

L4t3r4lu5 (1216702) | more than 3 years ago | (#35477444)

Cortex A9 is 250mW per core at 1GHz [wikipedia.org]

You're looking at, for a 240 core 2U node, 60W for CPUs. Pretty impressive.
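
A quick sanity check of that arithmetic, sketched in C (the 250mW/core figure is from the Wikipedia link above; note that TFA's full box is 480 cores, which doubles the number):

    #include <stdio.h>

    /* CPU-only power budget: cores x 250 mW/core. RAM, the fabric and
     * PSU losses come on top of this. */
    int main(void) {
        const double w_per_core = 0.25;                   /* Cortex A9 @ 1GHz */
        printf("240 cores: %3.0f W\n", 240 * w_per_core); /* 60 W, as above */
        printf("480 cores: %3.0f W\n", 480 * w_per_core); /* 120 W for TFA's box */
        return 0;
    }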

Re:is it worth it? (1)

arivanov (12034) | more than 3 years ago | (#35477470)

5W average, so let's assume up to 10W per CPU according to the article.

Not bad. In fact, good enough to completely replace a commercial non-metered hosted VM offering of the kind Memset (http://www.memset.co.uk/) offers at present.

The interesting question here is what is the interconnect between them. After all, who cares that you have 480 cores in 2U if 90% of the time they are twiddling their thumbs waiting for data to be delivered to them.

Re:is it worth it? (0)

Anonymous Coward | more than 3 years ago | (#35477530)

No one said a hosted VM would be cheaper - in fact, expect outsourcing to be more expensive than your immediate direct costs.

Re:is it worth it? (1)

TheRaven64 (641858) | more than 3 years ago | (#35477726)

TFA said 5W per node, meaning per 4 cores + RAM. That's 600W for the entire system, which is fine for a 2U enclosure.

Aside from the interconnect, the other important question is how much RAM are they going to have? They're using the Cortex A9, not the A15, so they just have a 32-bit physical address space. In theory, this lets them have 4GB of RAM per node (1GB per core), but some of that needs to be used for memory-mapped I/O, so I'd be surprised if they got more than 3GB, maybe only 2GB. That would mean only 512MB per core, which is a little bit tight for a lot of workloads.

Re:is it worth it? (1)

wagnerrp (1305589) | more than 3 years ago | (#35479760)

512MB per core really isn't bad at all, when you consider that each core has about the same performance as a 10-year-old Pentium 3.

Re:is it worth it? (0)

Anonymous Coward | more than 3 years ago | (#35477474)

Suuuuure and 600W when the server is actually doing something. If it was always idling, why would you have the server?

Re:is it worth it? (1)

somersault (912633) | more than 3 years ago | (#35477508)

A lot of servers are idling for most of the day, but you need them to be able to scale up quickly at certain peak times.

Re:is it worth it? (1)

Sulphur (1548251) | more than 3 years ago | (#35477562)

A lot of servers are idling for most of the day, but you need them to be able to scale up quickly at certain peak times.

Do you mean power up quickly?

Re:is it worth it? (3, Interesting)

somersault (912633) | more than 3 years ago | (#35477684)

Not really, the server could stay powered up the whole time (unless you really get 0% usage at non-peak times, and those times are predictable, in which case it makes sense to just power down completely at those times). By scaling up I mean enabling more cores, thus improving the processing capacity of the server. Then you'd get the best of both worlds, with the server being fine for anything from small to massive workloads, while still using less power than the equivalent x86 setup. Like modern engines which can enable or disable cylinders at will to conserve fuel when not much power is needed.
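
On Linux, that kind of core scaling is exposed through CPU hotplug in sysfs; a minimal sketch, assuming the standard /sys/devices/system/cpu/cpuN/online interface and a kernel/board that supports it (needs root):

    #include <stdio.h>

    /* Bring a core online (1) or offline (0) via Linux CPU hotplug. */
    static int set_cpu_online(int cpu, int online) {
        char path[64];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/online", cpu);
        FILE *f = fopen(path, "w");
        if (!f) return -1;            /* no hotplug support, or not root */
        fprintf(f, "%d\n", online);
        return fclose(f);
    }

    int main(void) {
        set_cpu_online(3, 0);   /* park core 3 during the quiet hours */
        set_cpu_online(3, 1);   /* bring it back for peak load */
        return 0;
    }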

Re:is it worth it? (1)

oranGoo (961287) | more than 3 years ago | (#35477516)

If(!) ARM is more energy efficient, then it delivers more processing power per watt. The principle works the same at 250mW and at 600W. It would also generate less heat. The ability to turn cores on and off is an additional benefit that would further improve efficiency.

Re:is it worth it? (1)

Bert64 (520050) | more than 3 years ago | (#35477494)

That is the benefit of ARM: the threshold for how many chips you can have is much higher, because each individual chip uses less power.

Re:is it worth it? (1)

symbolset (646467) | more than 3 years ago | (#35477514)

Yes, you are wrong.

Re:is it worth it? (4, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#35477588)

It really depends on how much (and what kind of) support hardware ends up being involved in having lots and lots of them together in some useful way. That and what inefficiencies, if any, are present because your workload was really expecting a smaller number of higher-performance cores.

The power/performance of the core itself remains the same whether you have 1 or 1 million. The power demands of the memory may or may not change: phones and the like usually use a fairly small amount of low-power RAM in a package-on-package stack with the CPU. For server applications, something that takes DIMMS or SODIMMs might be more attractive, because PoP usually limits you in terms of quantity.

The big server-specific questions are going to be the nature of the "fabric" across which 120 nodes in a 2U are communicating. Because 120 ports worth of 10/100 or GigE would occupy 3Us and draw nonzero power themselves, I'm assuming that this fabric is either not ethernet at all, or some sort of cut-down "we don't need to care about the standards because the signal only has to travel 6 inches over boards we designed, with our hardware at both ends" pseudo-ethernet that looks like an ethernet connection for compatibility purposes, but is electrically more frugal. Whatever that costs, in terms of energy, will have to be added on to the effective energy cost of the CPUs themselves.

Then you get perhaps the most annoying variable: many tasks are (either fundamentally, or because nobody bothered to program them to support it) basically dependent on access to a single very fast core, or to a modest number of cores with very fast access to one another's memory. For such applications, the performance of 400+ slow cores is going to be way worse than a naive addition of their individual powers would suggest. Sharing time on a fast core is both fundamentally easier, and enjoys a much longer history of development, than does dividing a task among small ones. With some workloads, that will make this box nearly useless (especially if the interconnect is slow and/or doesn't do memory access). For others, performance might be nearly as good as a naive prediction would suggest.
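
Amdahl's law puts numbers on that last paragraph; a sketch (the 10x per-core speed gap between the fast core and the slow ones is an assumption for illustration):

    #include <stdio.h>

    /* Relative throughput of n cores of speed s on a task with a given
     * serial fraction: s / (serial + parallel/n). */
    static double throughput(double serial, int n, double s) {
        return s / (serial + (1.0 - serial) / n);
    }

    int main(void) {
        /* One fast core (speed 1.0) vs 480 cores at a tenth the speed. */
        for (double f = 0.0; f < 0.21; f += 0.05)
            printf("serial %2.0f%%: 1 fast core %.2f, 480 slow cores %.2f\n",
                   f * 100, throughput(f, 1, 1.0), throughput(f, 480, 0.1));
        return 0;
    }

Even a 5% serial fraction drops the 480 slow cores from 48x the fast core to under 2x, and at 20% serial they actually lose.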

Re:is it worth it? (0)

Anonymous Coward | more than 3 years ago | (#35477812)

no man, for power consumption reasons they should simply use 100Kbit token ring!

Re:is it worth it? (2)

wvmarle (1070040) | more than 3 years ago | (#35478086)

Most servers do not do heavy computing work: they serve up (dynamic) web pages, handle SQL queries, process e-mail, serve files. That sounds to me like lots and lots of threads that each have relatively little work to do.

For example, /.: serving a single page to a single visitor takes a few dozen SQL queries and the running of a Perl script to stitch it all together. This takes, say, 0.001 seconds of time on an x86 core - a wild guess, maybe an order of magnitude off, but good enough for the sake of the argument. An ARM core is maybe a tenth of that speed, so that single page would need 0.01 seconds of processing time to build. And that is assuming the processor is the bottleneck. Likely the network access to the SQL servers is the bottleneck, which may make the overall time to build that page about the same either way.

But now there are thousands upon thousands of visitors - all requesting pages. As this all goes parallel, it would simply require ten ARM cores to replace one x86 core and retain the same overall output.

Indeed, when you're doing heavy scientific calculations, ARM definitely won't stand a chance. But web pages don't even need any floating point arithmetic, and the same goes for handling an e-mail queue. It's I/O that's important: the capacity to move the correct bits from A to B. And from what I've learned about these processors, I don't think ARM does that so much worse than x86. So depending on the server load, there may really be something to it. Especially as those ten ARM cores use just a fraction of the power of a single x86 core.
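
The parent's arithmetic, sketched (the 0.001s/page x86 figure and the 10x ARM slowdown are the parent's own guesses):

    #include <stdio.h>

    /* Pages/second = cores / CPU-seconds-per-page, assuming the load is
     * CPU-bound and embarrassingly parallel (one request per core). */
    int main(void) {
        const double x86_sec = 0.001, arm_sec = 0.010;
        printf("1 x86 core:    %5.0f pages/s\n", 1 / x86_sec);    /* 1000 */
        printf("10 ARM cores:  %5.0f pages/s\n", 10 / arm_sec);   /* 1000 */
        printf("480 ARM cores: %5.0f pages/s\n", 480 / arm_sec);  /* 48000 */
        return 0;
    }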

Language-imposed gratuitous use of floating point (1)

tepples (727027) | more than 3 years ago | (#35478748)

But web pages won't even need you to do any floating point arithmetic.

Provided your application is written in a language that supports non-floating-point arithmetic. In PHP, for example, any division returns a floating-point result, as does any computation with numbers over 2 billion (such as the UNIX timestamps of dates past 2038).

Re:is it worth it? (0)

Anonymous Coward | more than 3 years ago | (#35478188)

because your workload was really expecting a smaller number of higher-performance cores.

Repeat after me, to the tune of "I'm a lumberjack, and I'm O.K.!":

I'm a workload and I don't care

You may think you've optimized me

but you're a bear.

. . .

Common software (think: LAMP stack) just doesn't use that much CPU per instance. IBM has been capitalizing on this by creating super powerful nodes running multiple virtual machines per node.

Poorly written number crunching nerd-ware will want to stay away from this kind of server and keep pining away for that 150GHz super-core that it has always dreamed of. For an ISP hosting 6000 websites that each get 12 hits a day, this thing is freaking perfect.

Re:is it worth it? (1)

npsimons (32752) | more than 3 years ago | (#35480528)

It really depends on how much (and what kind of) support hardware ends up being involved in having lots and lots of them together in some useful way. That and what inefficiencies, if any, are present because your workload was really expecting a smaller number of higher-performance cores.

I've been saying for years that people should make their chunks of code smaller (e.g., smaller functions) so it's easier to understand and maintain. The old argument has always been that the compiler will inline it even if you don't tell it to. I think now, looking toward the future, it's obvious that parallelization will be what drives performance. Code that is already broken down into smaller chunks will scale better to a large number of cores. I guess what I'm trying to say is: break your code down, even beyond what you think is too much; the compiler can inline it for beefier CPUs with fewer cores and, given the proper backends, automatically thread it for lower-power, massively multicore architectures. Plus you get the not-insignificant bonus of more maintainable code!
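
For what it's worth, the inlining half of that claim is easy to see; a tiny example (hypothetical helper names):

    /* Small, single-purpose helpers: easier to read, and free after inlining. */
    static inline double celsius_to_kelvin(double c) { return c + 273.15; }

    double average_kelvin(const double *celsius, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += celsius_to_kelvin(celsius[i]);  /* inlined at -O2: no call cost */
        return sum / n;
    }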

Re:is it worth it? (1)

poetmatt (793785) | more than 3 years ago | (#35478698)

There are two arguments for hardware in the enterprise. 1: Performance per watt. This is substantially more capable than just about anything out there for x86 right now, shy of supercomputers.

So... (0)

Anonymous Coward | more than 3 years ago | (#35477448)

How fast could that CPU brute force a 10-character password encrypted file (assuming decryption success/failure is returned)?

Re:So... (1)

Anonymous Coward | more than 3 years ago | (#35477586)

I think you would have more luck over at ExpertSexchange.

Try titling your post 'Urgent: I password-protected my 1TB porn collection and I forgot my p/w'.

thats how we roll.... (0)

Anonymous Coward | more than 3 years ago | (#35477484)

linux luvs this !X-86 strategy.
open source, thats how we roll...

WANTED: 1U low-power rack server (1)

inflex (123318) | more than 3 years ago | (#35477536)

Right now I'm running an Intel D510 rack server with dual 2.5" drives. It's great - does a lovely job even running Ubuntu 10.04 server + VirtualBox (Ubuntu 8.04 LTS guest). However, I'd dearly love to shift over to something even more low-power/compact/SoC; so long as it has SATA, Ethernet and USB and runs a Debian-based distro, I'd be happy.

Something like a dual-core ARM machine would be ample for the server loads I'm seeing.

So, anyone seen anything like that yet? Or even just a motherboard in Mini-ITX?

(BTW, why is it that enabling Intel HT still seems to cause random hangs... or maybe it's just coincidence.)

Re:WANTED: 1U low-power rack server (1)

Anne Thwacks (531696) | more than 3 years ago | (#35477574)

I want one too (probably three). But I want to run OpenBSD on mine.

Re:WANTED: 1U low-power rack server (0)

Anonymous Coward | more than 3 years ago | (#35477612)

Sounds like you want a Pogoplug Pro or a Sheevaplug. Some of those have SATA ports, all have USB, and all run Linux of some sort.

Re:WANTED: 1U low-power rack server (1)

vlm (69642) | more than 3 years ago | (#35477754)

How's the dual-drive support on the SheevaPlug? Looks like the Pogoplug also uses USB as its "drive interface".

Something like a Soekris board/case that handles two SATA drives in a RAID mirror would be nice.

The best bet for the original poster is to ask the MythTV guys for low-power/fanless options, and stuff it all into a 1U case (assuming rackmount is mandatory).

Re:WANTED: 1U low-power rack server (1)

wagnerrp (1305589) | more than 3 years ago | (#35480044)

The MythTV guys have completely different needs than an underutilized server operator. We have to deal with a very complex scheduler, which can cause problems if it takes too long to run, and with HD video that typically can only be decoded single-threaded. Single-threaded performance, and a lot of it, is a must, meaning our minimum recommendation is a 2.5GHz Core 2 or Athlon II, or better.

That's not to say you can't be low power while you're at it. Tom's Hardware did an article last year where, with no great effort, they put together a 3.33GHz dual-core i5 that idled under 25W. Even better, one of the Mac Mini servers would idle at less power than your existing Atom. It's always nice to have the headroom available should you want it in the future, and at 25W, it's only going to consume maybe $50 more electricity over a 5-year life than that Atom system.

Re:WANTED: 1U low-power rack server (1)

SuricouRaven (1897204) | more than 3 years ago | (#35477986)

Pogoplugs are toasty. They've been plagued by overheating issues.

Re:WANTED: 1U low-power rack server (2)

TheRaven64 (641858) | more than 3 years ago | (#35477736)

Take a look at the PandaBoard [pandaboard.org] , if you want a low-power, dual-core ARM server, although you'd have to use CF + USB for storage, not SATA. Note, however, that VirtualBox is x86-only. If you want virtualisation, you're currently pretty limited on ARM. There is a Xen port, but it's not really packaged for end users yet.

Re:WANTED: 1U low-power rack server (1)

fnj (64210) | more than 3 years ago | (#35478300)

Why does the spec page omit the single most important spec: power consumption?

Re:WANTED: 1U low-power rack server (1)

aztektum (170569) | more than 3 years ago | (#35481306)

Good luck getting one of those in your hands. My coworker right across the aisle ordered one in January. Still not sure when it will ship.

Re:WANTED: 1U low-power rack server (2)

espiesp (1251084) | more than 3 years ago | (#35477740)

While not in 1U format, a lot of off-the-shelf NAS boxes use ARM. My LG N2R1 NAS has an 800MHz Marvell 88F6192 and runs Lenny. I wouldn't be surprised to see some Nano-ITX boards out there running similar hardware. Plus, I've been very impressed with how many Debian packages are available for armel. While not perfect, it's the most useful Linux server I've ever had.

Re:WANTED: 1U low-power rack server (1)

inflex (123318) | more than 3 years ago | (#35477770)

That's a good point about the NAS systems, they're comparatively cheap too!

Re:WANTED: 1U low-power rack server (2)

Nursie (632944) | more than 3 years ago | (#35478066)

You need to watch out with them also though. The WD Sharespace I have uses a 500MHz chip which is totally inadequate for decent throughput between the 4-disk array and the GigE interface.

And I had to write my own device support into the kernel to get it running a modern OS! It came with 2.6.12!

Re:WANTED: 1U low-power rack server (1)

inflex (123318) | more than 3 years ago | (#35478282)

Thanks - I've seen some Netgear MS-2000 units on sale recently for about $130 AUD, and the RND-2000 for $250.

Meh, maybe I'll just wait for AMD to bring out their "low power" options in Mini-ITX :sigh:

Re:WANTED: 1U low-power rack server (1)

StuartHankins (1020819) | more than 3 years ago | (#35478810)

I bought an RND-2000 and 2 fairly slow 2TB drives (5900 rpm, for less noise) since it was to be installed in my bedroom. I got the whole thing shipped with 2 drives for around $430.

Software-wise it's fairly nice, with support for Time Machine, AFP, CIFS etc and works great for any single task. But ask it to do more than 1 task and it just doesn't have the horsepower -- for instance copying a large file and trying to play a song causes the song playback to be delayed. If you're using an iPad to stream music or video that also works fine -- unless there's a Time Machine backup going. Then you are delayed; you can't even navigate to different folders from the iDevice. The RISC chip used in the RND-2000 is just soooo slow. Although I can ssh to it (a big plus when the AFP goes nuts and I can no longer delete folders with strange names) and even use rsync on it, it's substantially faster to mount the drive and run rsync from my Mac... this thing is really CPU-bound.

The good news is that while it's copying a file, it gets around 2GB/minute with journaling disabled, jumbo frames turned on, over a GbE network which is pretty good. I know the next model up is around $1000 but I would probably go with the upgrade unless it's truly something you want to use as a single person and don't need simultaneous stuff going on.

Re:WANTED: 1U low-power rack server (1)

Nursie (632944) | more than 3 years ago | (#35480828)

Wow, that is *awesome* compared to the max transfer of around 24MB/s (bytes at least, not bits) I get out of the Sharespace.

That's over vanilla FTP, and the processor is maxed at that point. Not the drives or the network interface - the processor. Dammit so much...

Re:WANTED: 1U low-power rack server (0)

Anonymous Coward | more than 3 years ago | (#35478094)

Here are all the major SoC-type machines in one place. A few do have SATA: http://specialcomp.com/products.htm

Re:WANTED: 1U low-power rack server (1)

Tim99 (984437) | more than 3 years ago | (#35478266)

How about this? http://excito.com/bubba/products/overview.html [excito.com] Not dual core, but only uses 5 to 10 watts.

I have a couple of the earlier model, and use one as a personal Postfix, IMAP, file, web and music server. The other I use for Debian development.

Disclaimer: No relationship with Excito except as a satisfied customer.

Re:WANTED: 1U low-power rack server (1)

inflex (123318) | more than 3 years ago | (#35478410)

A shame, even with 50% off on some, they're as expensive as something like a FitPC2 :(

I'm hoping at some point we can see a $99 personal server option, maybe cram 4~6 into a 1U rack.

And it's useless. No 64-bit support. (0)

Cyberax (705495) | more than 3 years ago | (#35477544)

ARM _still_ has no real 64-bit support (only something resembling x86's PAE). So building a single-image server beyond 2-4 way is not really feasible.

It's funny that we're rehashing all of x86's past problems on ARM.

Re:And it's useless. No 64-bit support. (1)

jabjoe (1042100) | more than 3 years ago | (#35477578)

Do many websites need a 64-bit memory range? I don't think so. Big database servers and the like, yes, but I doubt many web servers do.

Re:And it's useless. No 64-bit support. (0)

Anonymous Coward | more than 3 years ago | (#35477600)

Ruby on Rails runs optimally with 128GB RAM.

Re:And it's useless. No 64-bit support. (1)

Cyberax (705495) | more than 3 years ago | (#35477616)

Yes, they do. First, if you're hosting a single website on a single server, you'll probably want to install more than 4GB just because RAM is so cheap now. And you'll inevitably use it (for databases, file cache, etc.). If you're hosting multiple sites on a single server, then you DEFINITELY need more than 4GB of RAM per server (as it's going to be the limiting component).

Maybe ARM is justified for large Google-style server farms doing specialized work which does not require great amounts of RAM.

Re:And it's useless. No 64-bit support. (2)

GeLeTo (527660) | more than 3 years ago | (#35477648)

ARM's Large Physical Address Extensions (LPAE) allows access to up to 1TB of memory. While I doubt applications will use this, it will allow each virtualized host on the server to use 4GB of memory.

Re:And it's useless. No 64-bit support. (0)

Cyberax (705495) | more than 3 years ago | (#35477662)

PAE-like schemes always have a lot of problems. Just read Linus' rants about it :)

Re:And it's useless. No 64-bit support. (4, Informative)

TheRaven64 (641858) | more than 3 years ago | (#35477782)

How about a link to this rant, if you want us to read it? And, if you've got a problem with PAE-like extensions, then I presume you're aware that both Intel's and AMD's virtualisation extensions use PAE-like addressing?

All that PAE and LPAE do is decouple the size of the physical and virtual address spaces. This is a fairly trivial extension to existing virtual memory schemes. On any modern system, there is some mechanism for mapping from virtual to physical pages, so each application sees a 4GB private address space (on a 32-bit system) and the pages that it uses are mapped to some from physical memory. With PAE / LPAE, the only difference is that this mapping now lets you map to a larger physical address space - for example, 32-bit virtual to 36-bit physical. You see exactly the opposite of this on almost all 64-bit platforms, where you have a 64-bit virtual address space but only a 40- or 48-bit physical address space.
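
A toy model of that decoupling (a sketch only: 4KB pages, 32-bit virtual and 36-bit physical as in classic PAE; real page-table formats carry flags and more levels):

    #include <stdint.h>
    #include <stdio.h>

    /* The page-table entry simply holds a frame number wider than the
     * virtual address space: 20-bit virtual page -> 24-bit physical frame. */
    static uint64_t translate(uint32_t vaddr, const uint64_t *frame_of_page) {
        uint32_t vpage  = vaddr >> 12;           /* virtual page number */
        uint32_t offset = vaddr & 0xFFF;         /* offset within the 4KB page */
        return (frame_of_page[vpage] << 12) | offset;
    }

    int main(void) {
        static uint64_t table[1 << 20];          /* one entry per virtual page */
        table[0x12345] = 0xABCDEF;               /* a frame above the 4GB line */
        printf("phys 0x%llx\n",                  /* -> 0xABCDEF678, 36 bits */
               (unsigned long long)translate(0x12345678, table));
        return 0;
    }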

The big problem with PAE was that most machines that supported it came with 32-bit peripherals and no IOMMU. This meant that the peripherals could do DMA transfers to and from the low 4GB, but not anywhere else in memory. This dramatically complicated the work that the kernel had to do, because it needed to either remap memory pages from the low 4GB and copy their contents or use bounce buffers, neither of which was good for performance (which, generally, is something that people who need more than 4GB of RAM care about).

The advantage is that you can add more physical memory without changing the ABI. Pointers remain 32 bits, and applications are each limited to 4GB of virtual address space, but you can have multiple applications all using 4GB without needing to swap. Oh, and you also get better cache usage than with a pure 64-bit ABI, because you're not using 8 bytes to store a pointer into an address space that's much smaller than 4GB.

By the way, I just did a quick check on a few 64-bit machines that I have accounts on. Out of about 700 processes running on these systems (one laptop, two servers, one compute node), none were using more than 4GB of virtual address space.

Re:And it's useless. No 64-bit support. (2)

pmontra (738736) | more than 3 years ago | (#35477834)

How about a link to this rant

http://blog.linuxolution.org/archives/117 [linuxolution.org]

Re:And it's useless. No 64-bit support. (0)

Anonymous Coward | more than 3 years ago | (#35478120)

My FreeBSD box runs 4096 Apache threads using roughly 6GB of memory space. This is on 64-bit, yes. On the other hand, the only reason it doesn't have 16GB is because I didn't buy it.

For web servers, it's RAM size, followed by disk speed, then CPU that limits you. For databases, it's CPU speed, disk I/O, then RAM size that limits you.

And here's the logic:
A properly sized web server will spawn all the threads it can reasonably expect to utilize (4x1024 or 8x512) upon startup, using the maximum amount of RAM. This assumes you're not stupidly running mod_perl, mod_php, or any other heavyweight interpreter inside the webserver. If you are, then divide by 10 if not 100 (depending on the submodules loaded).

The web server will then serve as much as it can as long as there is traffic, and nothing more. It will never page-swap, and can be slashdotted without crashing. Linux servers tend not to do that, and it's a long story. The goal is to never touch the pagefile.

A database server, if it's heavily used, is largely stuck on the slowest part (disk I/O) when it has to do full table scans. You solve this by building proper indexes, large enough key and query caches (or memcached if you're really inept at configuring things), and so forth. When you look at the database server's process list, you should see nothing 99% of the time. If you're constantly seeing more than 2 processes for reasonable requests, then your software or indexes are poorly set up. Most web-based forum software severely fails at this (Vbulletin4 = very suck, phpBB3 = much suck) and you need to hand-tune the indexes or, worse, edit the software to use specific indexes.

In the last 10 years, server speeds have barely tripled in MHz, but RAM and disk I/O have greatly improved. If it weren't for the wear issues, I'd say everyone should be using flash drives for their databases and webservers.

Re:And it's useless. No 64-bit support. (1)

tepples (727027) | more than 3 years ago | (#35478930)

A database server, if it's heavily used, is largely stuck on the slowest part (disk I/O) when it has to do full table scans. You solve this by building proper indexes

Until you have to use a DBMS that ignores your indexes. For example, MySQL appears unable to make efficient use of an index on a subquery that uses GROUP BY. From the manual [mysql.com]: "A subquery in the FROM clause is evaluated by materializing the result into a temporary table, and this table does not use indexes. This does not allow the use of indexes in comparison with other tables in the query, although that might be useful." The only reason I haven't already rewritten it as a join is that the subquery uses GROUP BY. The workaround I have adopted is to rewrite the query as multiple CREATE TEMPORARY TABLE ... SELECT statements so that as few rows as possible are seen at once. Or is there a better workaround, other than dropping MySQL entirely?

Re:And it's useless. No 64-bit support. (1)

MarkRose (820682) | more than 3 years ago | (#35480998)

A proper webserver only needs 1 thread per core. Each socket/connection should only consume a few KB of RAM. A webserver shouldn't use more than a couple dozen MB of RAM at most, not including the OS file system cache. Look into Nginx or lighttpd.
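
That "1 thread per core" model is just an event loop; a minimal sketch in C (Linux epoll, error handling trimmed; real servers like Nginx run one such worker per core):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* One thread, many connections: react to readiness events instead of
     * dedicating a thread (and its stack) to every client. */
    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(8080) };
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 128);

        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
        epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

        for (;;) {
            struct epoll_event evs[64];
            int n = epoll_wait(ep, evs, 64, -1);
            for (int i = 0; i < n; i++) {
                int fd = evs[i].data.fd;
                if (fd == lfd) {                      /* new connection */
                    ev.events = EPOLLIN;
                    ev.data.fd = accept(lfd, NULL, NULL);
                    epoll_ctl(ep, EPOLL_CTL_ADD, ev.data.fd, &ev);
                } else {                              /* request ready */
                    char buf[4096];
                    if (read(fd, buf, sizeof buf) > 0) {
                        const char *r = "HTTP/1.0 200 OK\r\n\r\nhello\n";
                        write(fd, r, strlen(r));
                    }
                    close(fd);       /* closing also removes it from epoll */
                }
            }
        }
    }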

Re:And it's useless. No 64-bit support. (2)

TheRaven64 (641858) | more than 3 years ago | (#35478478)

His complaint basically boils down to the fact that the kernel needs to be able to map all of physical memory, and have some address space left over for memory-mapped I/O. This is a valid complaint for a kernel developer (although Linus' 'everyone who disagrees with me is an idiot' style is quite irritating), but it is largely irrelevant to the issue at hand. There is nothing stopping a kernel on ARM with LPAE from using 64-bit pointers internally. You still need to translate userspace pointers, but you need to do that anyway on most architectures (on x86, context switches are insanely expensive, so typically you use a segment for the kernel and run system call handlers without changing the page tables, just making the kernel segment visible by switching to ring 0), so that code already exists in all of the relevant places in the kernel.

Re:And it's useless. No 64-bit support. (1)

Cyberax (705495) | more than 3 years ago | (#35478668)

No, the problem is:
1) Kernel is starved for _address_ _space_ for its internal structures.
2) Userspace is starved for address space, because it has to view all the RAM through a small aperture (think EMS in 80286).
3) Constant address space remapping is costly.

And it doesn't matter that you use 64-bit pointers internally, because you can't address data directly.

Re:And it's useless. No 64-bit support. (1)

TheRaven64 (641858) | more than 3 years ago | (#35479470)

1) Kernel is starved for _address_ _space_ for its internal structures.

This is addressed by using physical addresses in the kernel, as I said. It can use 64-bit pointers, and the compiler emits direct loads and stores that bypass the MMU.

Userspace is starved for address space, because it has to view all the RAM through a small aperture (think EMS in 80286).

Which is only relevant if the process actually wants more than 4GB of address space, i.e. not very often (yet).

Constant address space remapping is costly

True, but this is only required on x86 because the kernel is using its own virtual address space. This is not an issue on ARM.

Re:And it's useless. No 64-bit support. (1)

Theovon (109752) | more than 3 years ago | (#35479104)

I do scientific computing where we regularly use virtual address spaces larger than 4GB. Not all of that is in the working set, of course, but it's often necessary to have that much mapped. One recent example is my leakage power and delay models for near-threshold circuits. I implemented the Markovic formulas and found them to be too slow. My simulations would take days. So, I figured out the granularities I needed for voltage, power, and temperature, and I implemented those models as giant look-up tables. The leakage power model occupies 4GB of address space all by itself. I just mmap the file into the process and go. Now the simulations take only hours.
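
The mmap trick he describes, as a minimal sketch (the file name and indexing are hypothetical; and, as he says, it takes a 64-bit virtual address space to map 4GB in one go):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map a huge precomputed table; pages fault in on demand, so only the
     * working set actually occupies RAM. */
    int main(void) {
        int fd = open("leakage_table.bin", O_RDONLY);    /* hypothetical file */
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) return 1;
        const double *table =
            mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (table == MAP_FAILED) return 1;

        size_t idx = 42;             /* from (voltage, temp) grid coordinates */
        printf("%g\n", table[idx]);  /* plain array access, no explicit I/O */
        munmap((void *)table, (size_t)st.st_size);
        close(fd);
        return 0;
    }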

Re:And it's useless. No 64-bit support. (1)

TheRaven64 (641858) | more than 3 years ago | (#35479500)

If you are doing scientific computing, then you are not in the target market for a system like this. The virtual address space size is the least of your problems - the relatively anaemic floating point performance is going to cripple your performance.

Re:And it's useless. No 64-bit support. (0)

Anonymous Coward | more than 3 years ago | (#35477896)

If Linus has a rant about it, I would tend to believe that it's a good (if not perfect) idea.

Re:And it's useless. No 64-bit support. (0)

Anonymous Coward | more than 3 years ago | (#35477974)

Especially when it's one where he calls you a moron six times.

Not that offensive people are always wrong, but it's generally true in Linus' case.

Re:And it's useless. No 64-bit support. (1)

GeLeTo (527660) | more than 3 years ago | (#35477976)

Linus' rant is about using PAE in a desktop environment, which I agree with (that's why I said I doubt any applications will use PAE). It says nothing about virtualisation. LPAE will work just fine for VMs.

Re:And it's useless. No 64-bit support. (0)

Anonymous Coward | more than 3 years ago | (#35478864)

He's also anti-micro-kernel. That doesn't make micro-kernels bad.

He is just a guy. A very well respected and brilliant guy, but just a guy nonetheless.

Re:And it's useless. No 64-bit support. (1)

Anonymous Coward | more than 3 years ago | (#35477838)

Utter bollocks. I work for a data centre, and there is no way 4GB is *required* for multiple sites or anything like that. How about one server running 20-odd Linux jails, each with between 20 and 32 sites, all in 2GB?

Re:And it's useless. No 64-bit support. (1)

wvmarle (1070040) | more than 3 years ago | (#35478104)

Instead of virtualising ten servers on a single physical box, you could of course consider running a single server on a single piece of hardware again. And still win power/flexibility wise if you can get your "low-power" ARM board to cost much less than your souped up x86 board. If only because if a single board fails, just one server goes down. Not all ten.

Re:And it's useless. No 64-bit support. (1)

SuricouRaven (1897204) | more than 3 years ago | (#35477760)

Even programs that you wouldn't expect to need much memory often benefit heavily, as any modern desktop or server OS uses free RAM for disk caching. Adding more memory means fewer slow, slow disk reads are needed.

Re:And it's useless. No 64-bit support. (2)

Bengie (1121981) | more than 3 years ago | (#35478046)

A 64-bit memory range? Each node is going to have its own memory slot(s). 120 cores, 4 cores per node = 30 nodes. If you plan to have less than 4GB of memory in this system, how small does each stick have to be when you plug 30 in? ~128MB. Good luck finding a bunch of 128MB DDR2/3 sticks to plug into your 4GB, 120-core web server. Anyway, each node needs its own local copy of the data it needs to serve up. If your web page needs ~256MB, each node is going to need the same 256MB of data duplicated, plus any extra overhead. You can't expect all 30 nodes to access the same 2-3 memory slots; that would scale like crap. This is one of the issues you get when scaling via cores: interconnect bandwidth/latency becomes an issue and you need to use local storage to allow fully independent processing. Once you start getting up into these ranges, you're better off thinking of each node as its own computer with a fairly high-speed network.

Re:And it's useless. No 64-bit support. (1)

cb88 (1410145) | more than 3 years ago | (#35478630)

ARM chips almost always use embedded RAM right on top of the chip package. So yeah, they are probably going to have 2-4GB per chip mounted PoP (package-on-package). It's definitely going to be a more NUMA-like architecture. The question is: will the separate processors share any memory at all, or will they act completely separately?

Re:And it's useless. No 64-bit support. (1)

JackDW (904211) | more than 3 years ago | (#35477742)

It couldn't be an SMP machine though, not with so many cores.

My bet would be that each of the 120 nodes actually is a complete computer with 4 cores and its own memory - linked to the other 119 only via Ethernet. In this arrangement the 32-bit memory limit is not such a big issue. Each individual machine will not be particularly powerful anyway.

Re:And it's useless. No 64-bit support. (2)

the linux geek (799780) | more than 3 years ago | (#35481510)

This kind of arrangement gets brought up over and over - one of the more recent examples is SiCortex, and it sucked. Having a Single System Image is always preferable to a "cluster in a box."


160 more (1, Funny)

Hognoxious (631665) | more than 3 years ago | (#35477642)

Another 160 and that should be enough for anybody!

Re:160 more (1)

Falconhell (1289630) | more than 3 years ago | (#35477828)

Damnit, second-to-last post currently, and you beat me to that joke!

x86 instruction set (0)

Anonymous Coward | more than 3 years ago | (#35477692)

why do we need hyperthreading or branch prediction when we have 480 cores?

GET SOME PRIORITIES!!! (-1)

Anonymous Coward | more than 3 years ago | (#35477878)

The worst natural disaster in recorded history occurred less than a week ago, and you people are discussing Calxeda's first ARM-based server chip, designed to let companies build low-power servers with up to 480 cores; as the chip is built on a quad-core ARM processor, and low-power servers could have 120 ARM processing nodes in a 2U box; chips will be based on ARM's Cortex-A9 processor architecture???? My *god*, people, GET SOME PRIORITIES!

The bodies of nearly 10,000 dead people could give a good god damn about the advent of LAN parties, your childish Lego models, your nerf toys and lack of a "fun" workplace, your Everquest/Diablo/D&D addiction, or any of the other ways you are "getting on with your life".

Re:GET SOME PRIORITIES!!! (0)

Anonymous Coward | more than 3 years ago | (#35477982)

I don't give a shit, Nip. Just under 60 millions die a year and I have no intention of going into mourning for each of them.

Re:GET SOME PRIORITIES!!! (1)

Shikaku (1129753) | more than 3 years ago | (#35478038)

And you're posting on Slashdot, instead of flying your private jet to Japan to personally pick up debris and rescue people.

Oh right, only rich people have private jets, a lot of planes won't fly to Japan now, and even if you get a flight, unless you are currently in Japan with a car (most public transportation is down where help would be needed, and most Japanese people don't own cars), you'd have to walk to the disaster areas. You can't do anything except donate money and hope.

Grow up and learn that shit happens, and that your sheltered life can be destroyed in an instant, with little other people can do to help.

Re:GET SOME PRIORITIES!!! (1)

.tekrox (858002) | more than 3 years ago | (#35478156)

So basically you want Slashdot to turn into every news outlet on earth right now?
If I want to hear more about any of the current natural disasters, the state of Libya or even what lipgloss Jooolia is wearing this week - I'll turn on the Television or read a news-corporation owned website.

This is Slashdot, News for Nerds - just because a disaster happened doesn't mean we stop wanting to know about anything else.

Jeez.

Re:GET SOME PRIORITIES!!! (0)

zill (1690130) | more than 3 years ago | (#35478588)

Wow you're totally right. I should be busy posting defamatory comments on the Internet like you to help out the Japanese.

leave britney alone! (1)

luis_a_espinal (1810296) | more than 3 years ago | (#35479246)

The worst natural disaster in recorded history occurred less than a week ago, and you people are discussing Calxeda's first ARM-based server chip, designed to let companies build low-power servers with up to 480 cores; as the chip is built on a quad-core ARM processor, and low-power servers could have 120 ARM processing nodes in a 2U box; chips will be based on ARM's Cortex-A9 processor architecture???? My *god*, people, GET SOME PRIORITIES!

The bodies of nearly 10,000 dead people could give a good god damn about the advent of LAN parties, your childish Lego models, your nerf toys and lack of a "fun" workplace, your Everquest/Diablo/D&D addiction, or any of the other ways you are "getting on with your life".

I have in-laws and friends in Japan, and thank God they are all fine. But even if something had happened to them, what would you expect me, a /. reader, or anyone, to do? Cut my veins and pour ash on my head? What about the rest of the readers? You are just an attention whore looking for a cause célèbre to be upset about; nothing more, as your little rant does nothing constructive.

You don't know if people reading this donated to the cause. You don't know anything about anyone here, about what they do or feel, and yet you act as if you do.

There is a difference between mourning and empathy, and shameless, useless "leave Britney alone" attention whoring. Guess which one describes you, buddy.

the real question (1)

Anonymous Coward | more than 3 years ago | (#35477972)

The real question is: can anyone afford to install an Oracle database on that server?
