Slashdot: News for Nerds


ARM Readies Cores For 64-Bit Computing

Soulskill posted more than 3 years ago | from the exponential-growth dept.

Upgrades 222

snydeq writes "ARM Holdings will unveil new plans for processing cores that support 64-bit computing within the next few weeks, and has already shown samples at private viewings, InfoWorld reports. ARM's move to put out a 64-bit processing core will give its partners more options to design products for more markets, including servers, the source said. The next ARM Cortex processor to be unveiled will support 64-bit computing. An announcement of the processor could come as early as next week, and may provide further evidence of a collision course with Intel."

222 comments

wut (0, Informative)

Anonymous Coward | more than 3 years ago | (#34286756)

hi

Slashdot's ARM wet dreams. (-1, Troll)

Anonymous Coward | more than 3 years ago | (#34286826)

True, Windows does not run on ARM. More ARM means more Linux. Orgasmic.

Re:Slashdot's ARM wet dreams. (1, Insightful)

bradgoodman (964302) | more than 3 years ago | (#34287188)

MOD UP!

Re:Slashdot's ARM wet dreams. (4, Interesting)

hairyfeet (841228) | more than 3 years ago | (#34287504)

Can someone please explain the advantage of ARM over x86 in the server room? Because this one has me scratching my head. While I'm all for different arches (I have a PPC G3 Mac just so I could play with non-x86), I thought the whole point of ARM was that it's super low power for mobile devices. While I'm sure cutting power usage in the server room wouldn't be a BAD thing, considering how much software, for both Windows AND Linux, isn't built for ARM-based CPUs, I just don't get what the advantage would be over, say, a Bobcat, Nano, or Atom based solution.

Now in mobile I get it, as you can make a cheap iPad knockoff that gets 8+ hours of battery life, but in servers? Maybe there is a use case I don't know of, but when I was setting up servers, while power was a consideration, it certainly wasn't prioritized over performance in server roles. How well does ARM handle large numbers of users? How well does it scale with increasing demand? While I wish them all the best, I just haven't seen a screaming need for these, not when you already have Atom and Nano, and are about to have Bobcat and Bulldozer (which from the looks of it will be nice, as Bobcat and Bulldozer have a well-built GPU, so AMD Stream coding could be used), all in that same market. What am I missing here?

Re:Slashdot's ARM wet dreams. (5, Interesting)

del_diablo (1747634) | more than 3 years ago | (#34287600)

It should in theory scale better than x86-64 anyhow, and the performance per watt is quite superior, so yes, it has a major place in the server room.

Re:Slashdot's ARM wet dreams. (1, Informative)

cbhacking (979169) | more than 3 years ago | (#34287750)

Mind you, if ARM ever gets there, there will be a Windows version almost immediately. NT is actually quite portable. Historically, it's been on MIPS, Alpha, and PPC, in addition to x86, x64, and Itanium (the currently available ports). There's no reason Microsoft couldn't port it to ARM, and if they see a reason to do so (such as a servers-running-ARM market) they will certainly do so.

Re:Slashdot's ARM wet dreams. (1)

del_diablo (1747634) | more than 3 years ago | (#34287798)

Sure, but they will lose markedshare on the initial wave when the markeds starts appearing. When it finally comes to "5% of desktop(desktop+laptop,+etc) sales and rising?!", then Windows will pull out a version.
Before that, Linux will gain markedshare, most likely, unless they mess up attempts at markeding again.

Re:Slashdot's ARM wet dreams. (1, Interesting)

Anonymous Coward | more than 3 years ago | (#34288018)

Sure, but they will lose markedshare on the initial wave when the markeds starts appearing. When it finally comes to "5% of desktop(desktop+laptop,+etc) sales and rising?!", then Windows will pull out a version.
Before that, Linux will gain markedshare, most likely, unless they mess up attempts at markeding again.

Are you redarded?

Re:Slashdot's ARM wet dreams. (1)

Caerdwyn (829058) | more than 3 years ago | (#34288436)

He's a marked mad.

Re:Slashdot's ARM wet dreams. (1)

Pentium100 (1240090) | more than 3 years ago | (#34288154)

Current Windows software won't run on ARM. Maybe that's not a big concern for Linux, since most Linux software is open source and can be compiled for whatever platform you want, but I don't see companies buying ARM computers instead of x86 ones. (You know, the ones that still use IE6 because some business app requires it; going to ARM would be even worse, since all their current apps would stop working, not just the badly written ones.)

Re:Slashdot's ARM wet dreams. (1)

ensignyu (417022) | more than 3 years ago | (#34288468)

Microsoft could write an emulation layer to run x86 code on ARM. Apple created a 68000 emulator when they transitioned from 68k to PowerPC, and then a PowerPC emulator (Rosetta) when they switched to Intel x86 processors.

x86 isn't as easy to emulate, and the performance would probably be terrible, so it's not too likely. But it's an option if some future architecture beats the pants off x86 by enough that emulating x86 for legacy apps runs at a reasonable speed.

Re:Slashdot's ARM wet dreams. (3, Insightful)

Pentium100 (1240090) | more than 3 years ago | (#34288984)

Yes, emulation is an option, but I don't think an ARM chip running an x86 emulation layer will be competitive with native x86 CPUs. Didn't this happen to Itanium? Slow x86 performance and AMD's x86-64 resulted in a virtually zero market for Itanium.

Re:Slashdot's ARM wet dreams. (3, Insightful)

Confusador (1783468) | more than 3 years ago | (#34289178)

There are a lot of boxes out there doing nothing but serving files and printers; if ARM did start to be popular, you can be sure MS would make sure not to lose that business. And then, once you have the things installed, it suddenly makes sense to write some of your new programs to run on them...

Power and cooling costs (2, Insightful)

xswl0931 (562013) | more than 3 years ago | (#34287808)

In large datacenters, power and cooling costs have become a significant part of the TCO. For smaller server rooms x86 compatibility is probably more important.

Re:Slashdot's ARM wet dreams. (0)

Anonymous Coward | more than 3 years ago | (#34288014)

Did he mention servers? Oh well then.
I laughed.

Re:Slashdot's ARM wet dreams. (2, Interesting)

LWATCDR (28044) | more than 3 years ago | (#34288084)

Funny, but in 1990 I bet they said the same thing about Intel.
In any office of, say, 50 or so people, a 64-bit ARM would probably do just fine. NAS and SAN boxes in bigger installations would probably also run very well on a 64-bit ARM. And then one has to wonder just how many ARM cores might fit on a die.
ARM is a much more modern ISA than x86, so it will be interesting to see just where it goes. Trust me, if you had told anyone in 1982 that someday there would be an x86 that was faster per clock cycle than a Cray-1, ran with a multi-GHz clock, and had a 64-bit address space, they would have locked you in a rubber room.

Re:Slashdot's ARM wet dreams. (1)

h4rr4r (612664) | more than 3 years ago | (#34288428)

For windows maybe, but what popular software for linux is tied to x86?

I have run lots of stuff on Debian arm.

Performance per watt (1)

msobkow (48369) | more than 3 years ago | (#34288734)

Performance per watt is what matters in the server room, and that's one area where ARM handily trumps x86.

Re:Slashdot's ARM wet dreams. (2, Insightful)

imroy (755) | more than 3 years ago | (#34289054)

...considering how much software, both for Windows AND Linux, that isn't for ARM based CPUs...

CPU architecture doesn't really matter with FOSS: once you have a working compiler, you just compile everything from source. Alright, you need some arch-specific work in the kernel and a few other places too, but by the time you get to end-user applications, all of that is long gone. So I would reply that almost all Linux software already is built for ARM-based CPUs. Or MIPS. Or POWER/PowerPC. Or whatever architecture you want.

And one advantage that ARM's low power/heat could bring is high density. Take a look at the Gumstix [gumstix.com] boards. Now imagine a "blade server" board with 16 or more processors crammed onto it. You could easily get at least a few hundred CPUs in a 19-inch rack, with each CPU drawing less than a watt of power. Now I'm not really sure what could be done with such a system: either do everything over the network (NFS or ATAoE), or equip each CPU with a good lump of flash storage for data and programs. But it would draw very little power, and it's something to think about.

What's the point? (1)

pablodiazgutierrez (756813) | more than 3 years ago | (#34286852)

ARM has to work its way up the power curve. I don't see how 64-bit computing alone would let them snatch server-oriented clients. Similarly, I doubt Intel would be wise to deliver chips for the wristwatch market without first having something more compelling for the smartphone.

Re:What's the point? (4, Insightful)

MarcQuadra (129430) | more than 3 years ago | (#34286952)

You don't see the use?

Low-latency bare-metal file servers that consume only a few watts, but can natively handle huge filesystems and live encryption? It's a lot easier to handle a multi-TB storage array when you're 64-bit native, and the same goes for encryption. Look at Linux benchmarks for 32- vs 64-bit filesystem and OpenSSH performance.

Do you have any idea how many $4,000 Intel Xeon boxes basically sit and do nothing all day at the average enterprise? If you can put Linux on these beasties, you have a cheap place for projects to start; if load ever kills the 2GHz ARM blade, you can migrate the app over to an Intel VM or bare metal. I'll bet 80% of projects never leave the ARM boxes, though.

My whole department (currently seven bare-metal Intel servers and five VMs) could run entirely off a few ARM boxes running Linux. It would probably save an employee's worth of power, cooling, upkeep, and upgrade costs every year.

Re:What's the point? (1)

PCM2 (4486) | more than 3 years ago | (#34287080)

I'm seeing 64-bit ARM powered NAS boxes, too, dontchathink?

Re:What's the point? (5, Insightful)

TheRaven64 (641858) | more than 3 years ago | (#34287558)

Look at Linux benchmarks for 32 vs 64-bit filesystem and OpenSSH performance

What benchmarks are you looking at? If you're comparing x86 to x86-64, then you are going to get some very misleading numbers. In addition to the increased address space, x86-64 gives:

  • A lot more registers (if you're doing 64-bit operations, x86-32 effectively has only two usable registers, meaning a load and a store practically every other instruction).
  • The guarantee of SSE, meaning you don't need to use (slow) x87 instructions for floating point.
  • Addressing modes that make position-independent code (i.e. anything in a .so under Linux) much faster.
  • Shorter encodings for some common operations, at the cost of longer encodings for some rarely used ones.

Offsetting this is the fact that all pointers are now twice as big, which means that you use more data cache. On a more sane architecture, such as SPARC, PowerPC, or MIPS, you get none of these advantages (or, rather, removal of disadvantages), so 64-bit code generally runs slightly slower. The only reason to compile in 64-bit mode on these architectures is if you want more than 4GB of virtual address space in a process.

The ARM Cortex A15 supports 40-bit physical addresses, allowing up to 1TB of physical memory to be addressed. Probably not going to be enough for everyone forever, but definitely a lot more than you'll find in a typical server for the next couple of years. It only supports 32-bit virtual addresses, so you are limited to 4GB per process, but that's not a serious limitation for most people.

ARM already has 16 GPRs, so you can use them in pairs and have 8 registers for 64-bit operations. Not quite as many as x86-64, but four times as many as x86, so even that isn't much of an advantage. All of the other advantages that x86-64 has over x86, ARM has already.
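The Cortex A15 figures quoted above are easy to sanity-check. A rough sketch, assuming straight powers of two (binary units):

```python
# 40-bit physical addressing vs. 32-bit virtual addressing, as described
# for the Cortex A15 above.
phys = 2**40  # bytes addressable with 40 physical address bits
virt = 2**32  # bytes addressable with 32 virtual address bits

print(phys // 1024**4, "TiB of addressable physical memory")   # 1
print(virt // 1024**3, "GiB of virtual address space/process") # 4
```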

Re:What's the point? (1)

KonoWatakushi (910213) | more than 3 years ago | (#34288438)

ARM already has 16 GPRs, so you can use them in pairs and have 8 registers for 64-bit operations. Not quite as many as x86-64, but four times as many as x86, so even that isn't much of an advantage. All of the other advantages that x86-64 has over x86, ARM has already.

The amd64 architecture gets away with so few registers only because it can operate on memory directly. With a load-store architecture, eight registers would be extremely constraining. If anything, ARM should expand the register set for 64-bit mode.

Floating point? (1)

Nicolas MONNET (4727) | more than 3 years ago | (#34288908)

No floating point is ever required for filesystems or encryption.

ARM is very behind (0)

Anonymous Coward | more than 3 years ago | (#34288012)

Low power, high performance... that space is already occupied by Cavium, Tilera, and others...

However, in the MOBILE space this will have some applications...

Re:ARM is very behind (1)

the linux geek (799780) | more than 3 years ago | (#34288340)

Tilera is still niche in a lot of ways. Limited memory and I/O bandwidth, as well as lack of an FPU until the TileGX, holds them back.

NetLogic and Cavium are both higher-performance for general server applications - I'd be interested in the potential for a server based on the new NetLogic XLP chip.

Re:What's the point? (1)

gotpaint32 (728082) | more than 3 years ago | (#34288068)

Makes sense, but most enterprises are moving towards high-density virtualization. This seems to be going the other direction, towards specialized appliances rather than general-purpose computing. I could see workstations/terminals going the ARM route, as well as highly customized and code-optimized app servers. But I don't think you'll see many enterprises switching over just yet.

Re:What's the point? (1)

vadim_t (324782) | more than 3 years ago | (#34287484)

Cell phones with ARM CPUs and 512MB RAM already exist. That's a pretty big chunk of the 32 bit address space, so it seems to make a lot of sense to be ready for when it's exhausted.
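How big a chunk is that? A quick back-of-the-envelope check:

```python
ram = 512 * 1024**2  # 512 MB of RAM in a current phone
space = 2**32        # total 32-bit address space (4 GiB)

print(ram / space)   # 0.125 -- already an eighth of the address space
```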

Re:What's the point? (-1, Flamebait)

Anonymous Coward | more than 3 years ago | (#34287866)

Cell phones with ARM CPUs and 512MB RAM already exist. That's a pretty big chunk of the 32 bit address space, so it seems to make a lot of sense to be ready for when it's exhausted.

What do 32- vs 64-bit CPUs have to do with IPv4 (32-bit) vs IPv6 (128-bit, not 64) address space? The whole IP scheme has nothing to do with anything past the NIC. Even if you're talking about storing IPs in memory or whatnot, it's just an XXXX-bit variable.

Re:What's the point? (1)

sa666_666 (924613) | more than 3 years ago | (#34288290)

What are you talking about? Is this a joke that I'm missing?? The GP is talking about 512MB being a fair size chunk out of the possible address space of 4GB (ie, 32-bit address space). It has nothing to do with networking.

Re:What's the point? (1)

oiron (697563) | more than 3 years ago | (#34288884)

Memory address space, not IP, you dolt!

Re:What's the point? (0)

Anonymous Coward | more than 3 years ago | (#34288322)

40-bit physical addresses, sir... not 32-bit. That's 1 TB, not 4 GB.

Yes, a program can only access memory within its 32-bit address space, but you can run many programs within a much larger physical address space. Which means you could have 4 VMs on a 32-bit ARM using 8 GB each, for a total of 32 GB... or something like that.

Re:What's the point? (4, Informative)

forkazoo (138186) | more than 3 years ago | (#34287582)

Arm servers make sense in two places: the small and the giant. They fall down in the medium and large space.

In other words, my personal server currently runs a "low power" AMD Sempron. The CPU uses something like 40 Watts, and it is plenty fast enough for my needs. It makes my RAID work, and it serves stuff over NFS and Samba. There are only ever a few clients, and the CPU spends most days nearly idle. It's a small box with a small workload, and it would work just fine with an ARM CPU instead of an x86. (Assuming the hypothetical ARM system could physically connect my external RAID enclosure.) More CPU wouldn't hurt, and it would occasionally make a few things faster, but mostly putting a Xeon in this box would just make it louder.

In the realm of giant workloads, you have jobs that can't possibly be done by a single machine, no matter the budget. You are looking at needing many hundreds of even the biggest machines you can get. If you have a job that parallelizes that well, doing it with 1000 x86 boxes or 4000 ARM boxes isn't that big of a difference. If the ARM boxes are smaller, cheaper, and lower power enough that it outweighs the fact that you need more of them, then it would be crazy to go with whizzy Xeon boxes instead of Arm. Buzzword enthusiasts will throw labels like "Cloud scale computing" at this sort of thing.

Where ARM falls down on the job is anything that can be done by a 4 core Xeon, up to a handful of 32 Core Xeons. That's a big chunk of what we normally think of as the Server market. ARM doesn't compete very well in this space. When people say that ARM is a ridiculous idea for servers, this middle segment of the market is generally what they are thinking of. A cluster of a dozen little ARM boxes competes rather poorly with a single machine with four Xeon sockets in terms of management overhead, and the amount of effort required to parallelise workloads, and the amount of bandwidth between distant cores. If you have an application that has an expensive per-machine license, that speaks in favor of a single big machine, etc.

So, small office that needs a little NAS server to stash under the secretary's desk? ARM can pwn the market. Giant research institution with some parallelisable code trying to figure out how molecules do something naughty during supernovas? ARM can pwn the market. "Enterprise" level IT in a smallish, but uncrowded data center with adequate, already provisioned power and cooling... ARM may well be suitable in some cases, but it's certainly not an easy sell.

And, relatively common cell phones have 1 GB of RAM. In two years or so, a cell phone with 4 GB of RAM will seem perfectly reasonable. At that point, 64 bit ARM stops being a data center/desktop issue, and is simply required to hold onto the existing ARM core market.

Re:What's the point? (1)

rsborg (111459) | more than 3 years ago | (#34287736)

Arm servers make sense in two places: the small and the giant. They fall down in the medium and large space.

That is only because of the WinTel duopoly of the past decade and a half. Given a decent enough operating system (ChromeOS, OSX-iOS hybrid, Ubuntu Unity) and either a standards based information access model (html/http) or native app-stores, the requirement for x86(-64) disappears and we can liberate ourselves from the Intel processor hegemony... and the world will be a better place for it. (note: Intel isn't going away anytime soon, and neither is Windows... but they won't exist as we have known them for the past decade)

Re:What's the point? (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34288756)

>Giant research institution with some parallelisable code trying to figure out how molecules do something naughty during supernovas?

ARM is competitive with x86 in terms of FLOPS per anything? I don't think so, Tim.

That leaves only the ultra low end for ARM.

64-bit embedded possibilities... (5, Interesting)

MarcQuadra (129430) | more than 3 years ago | (#34286862)

I know folks think it's 'overkill' to have 64-bit CPUs in portable devices, but consider that the -entirety- of storage and RAM can be mmapped in the 64-bit address space... That opens up a lot of options for stuff like putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet, and other cool futuristic stuff.

I'm wondering when the first server/desktop OS is going to come out that realizes this and starts to merge the 'RAM' and 'Storage' into one 64-bit long field of 'fast' and 'slow' storage. Say goodbye to Swap, and antiquated concepts like 'booting up' and 'partitions'.
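The mmap() we already have gives a taste of this "one field of storage" idea. A minimal sketch (the temp file stands in for 'slow' storage): a file is mapped into the process's virtual address space, so reads and writes become plain memory accesses and the kernel pages data in and out behind the scenes.

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"\0" * 4096)   # back the mapping with 4 KiB on disk
    mem = mmap.mmap(fd, 4096)    # a 'RAM-like' view of the file
    mem[0:5] = b"hello"          # a plain memory write...
    mem.flush()                  # ...persisted to the 'slow' side
    with open(path, "rb") as f:
        print(f.read(5))         # b'hello'
    mem.close()
finally:
    os.close(fd)
    os.remove(path)
```

With only 32 bits of virtual address space, such windows are limited to 4 GiB at a time; a 64-bit space is what would let the whole storage pool live at fixed addresses.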

What about time_t 64-bit? (0)

Anonymous Coward | more than 3 years ago | (#34286970)

Consider something more important, like making time_t 64-bit to support dates beyond 2038... among other things.
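The 2038 limit falls out of the arithmetic: a signed 32-bit time_t counts seconds from 1970 and tops out at 2^31 - 1.

```python
import datetime

# The largest second count a signed 32-bit time_t can hold,
# converted to a calendar date.
limit = 2**31 - 1
print(datetime.datetime.fromtimestamp(limit, datetime.timezone.utc))
# 2038-01-19 03:14:07+00:00 -- one second later, it wraps
```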

Re:64-bit embedded possibilities... (1)

Monkeedude1212 (1560403) | more than 3 years ago | (#34286992)

I know folks think it's 'overkill' to have 64-bit CPUs in portable devices

I don't think of it that way; I think of it as laying foundations for the future. I would much rather be prepared for when 64-bit CPUs in mobile devices are a necessity than play catch-up when it happens. We have the technology, so why not? Like the limited IPv4 address space: wouldn't it have been sweet if we had switched to IPv6 BEFORE it became an issue?

Re:64-bit embedded possibilities... (2, Funny)

noidentity (188756) | more than 3 years ago | (#34287038)

You can do all this with a 32-bit address space as well. The only thing that must be swapped is the data. All the code can have its own addresses, unless you plan on having more than 4GB of application code on your mobile device. 4GB should be enough for anybody...

Re:64-bit embedded possibilities... (1)

newcastlejon (1483695) | more than 3 years ago | (#34287118)

Don't you count laptops as portable? I realise that you were talking about smartphones but I'm skeptical that having wider use of 64-bit processors will bring about all the cool future stuff you name.

Having decayed into a near-layman when it comes to CS, I'm also curious as to why we need those extra bits for said stuff in the first place. It seems there ought to be a reason why fast and slow storage are separated logically, and I would also say that, at first glance, there's no reason why needing to boot, or having FS partitions, has anything to do with having a 32- or 64-bit CPU.

Do please enlighten me, and if you'd be so kind to do it without just listing vague but cool-sounding concepts I'd be ever so grateful.

Re:64-bit embedded possibilities... (1)

0123456 (636235) | more than 3 years ago | (#34287170)

Having decayed into a near-layman when it comes to CS I'm also curious as to why we need those extra bits for said stuff in the first place.

You can't map a complete 2TB disk into a 32-bit address space.

However, I think the idea is kind of bogus: you don't want every application having access to all blocks on the disk, you don't want every application having to deal with filesystem layout (you can't just write to byte 42 on the disk without ensuring no one else is going to), and you do want to keep applications' memory separate. And at some point you have to reboot, even if just because you upgraded your OS kernel.
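The 2TB figure is easy to verify (treating TB as binary here, i.e. 2^40 bytes):

```python
disk = 2 * 1024**4    # a 2 TB disk
window = 2**32        # what a 32-bit address space can see at once

print(disk // window) # 512 -- you'd need 512 separate 4 GiB windows
```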

Re:64-bit embedded possibilities... (2, Interesting)

newcastlejon (1483695) | more than 3 years ago | (#34287254)

You can't map a complete 2TB disk into a 32-bit address space.

That I can understand.

...putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet...

This stuff, however, defies comprehension.

Re:64-bit embedded possibilities... (1)

MarcQuadra (129430) | more than 3 years ago | (#34287294)

Just because the distinction between RAM and disk would go away doesn't mean that all access control goes with it; I don't see how that's implied. An application would just be running in a sandbox that's really a 'file' sitting in the portion of the address space hosted by RAM. If the app doesn't get used, or the kernel needs to 'swap' it to free up resources closer to the 'hot' side of the stack, or it gets put to sleep for any other reason, it migrates out of RAM and back to disk.

Same with user sessions... Imagine that you log in and your session is actually brought back for you. Not the way Gnome/KDE does it now, by reopening apps with the same names; I'm talking about -actually getting your session back-. When you log out, all memory that's owned by you (all apps and data you have open) gets compressed and moved to disk until you return.

Re:64-bit embedded possibilities... (1)

0123456 (636235) | more than 3 years ago | (#34287312)

An application would just be running in a sandbox that's really just a 'file' sitting in the portion of the address space that's hosted by RAM. If the app doesn't get used, or the kernel needs to 'swap' it to free up resources closer to the 'hot' side of the stack, or gets put to sleep for any other reason, it migrates out of RAM and back to disk.

And you don't need a 64-bit address space to do that, if it was really important we could have done it long ago.

Re:64-bit embedded possibilities... (1)

MarcQuadra (129430) | more than 3 years ago | (#34288110)

The limits were always right around the corner with 8-to-32-bit computing. Everyone knew that 4GB hard drives were coming when the 386 came out. With 64 bits of address space, there are 16 exabytes to play with. That's not coming to a PC or a business near you anytime soon.

Re:64-bit embedded possibilities... (1)

newcastlejon (1483695) | more than 3 years ago | (#34288446)

Again, the ability to access more than x bytes of storage isn't the issue. What I asked - and what you've yet to answer - is how the more widespread adoption of 64-bit processors is going to enable the "cool" stuff you mention.*

*I should point out that you still haven't defined what, for example, a one-time-use application is supposed to be, because frankly it sounds like just another meaningless marketing term. After that, perhaps you might explain why it needs 64 bits' worth of address space to pull off?

Re:64-bit embedded possibilities... (1)

Jah-Wren Ryel (80510) | more than 3 years ago | (#34287640)

That's called checkpoint/restart, and it's been around on 32-bit machines for decades. It's not commonly used; maybe internet ubiquity might change that, but 64-bitness isn't even close to necessary.

Re:64-bit embedded possibilities... (1)

DragonWriter (970822) | more than 3 years ago | (#34287746)

However, I think the idea is kind of bogus, because you don't want every application having access to all blocks on the disk

Just because you have hardware that allows for something to be done doesn't mean that the OS has to allow "every application" to make full unsupervised use of that capability.

OTOH, if the hardware doesn't support it well, then no application -- and no part of the operating system -- can effectively leverage the capacity.

And at some point you have to reboot even if just because you upgraded your OS kernel.

There is no reason that needs to be the case. Most operating systems are currently not designed to avoid that need, but there is at least one combination of software and service for Linux (Ksplice) that obviates rebooting for kernel updates. And there is no reason an OS couldn't be maintained so that this was the norm for updates as usually distributed, rather than something achieved by a combination of third-party software (automatically handling most of the work) and aftermarket programming effort (adding code to handle the minority of changes the automated software can't).

Re:64-bit embedded possibilities... (1)

MarcQuadra (129430) | more than 3 years ago | (#34288082)

"there ought to be a reason why fast and slow storage is separated logically"

The entire computing paradigm that we're familiar with is based on the idea that address space is very limited, RAM is expensive, and disk is cheap. That's not true anymore; with 64 bits of address space, you could have access to a 'field' of 16 exabytes. That's more than the entire storage available at the research university I work at. You could literally have common address pointers and direct, unbrokered access to vast amounts of data, migrate running apps and sessions between machines, and break free from the limits we have built computer science around.
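For scale, here is what a full 64-bit 'field' works out to (binary units assumed):

```python
space = 2**64  # bytes addressable with 64 bits

print(space // 1024**4, "TiB")  # 16777216 TiB
print(space // 1024**5, "PiB")  # 16384 PiB, i.e. 16 EiB
```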

Re:64-bit embedded possibilities... (1)

newcastlejon (1483695) | more than 3 years ago | (#34288408)

RAM is still expensive, and storage is still cheap. The number of addresses you can, well, address is irrelevant to the problems of making RAM cheaper or storage faster.

You say you can address a gajillion bytes with a 64-bit CPU. So what? That won't make the spindles spin any faster, or multiply the RAM cells you have, or magically do away with NUMA.

Correct me if I'm wrong, but a narrow address bus isn't the reason memory and storage are separate. Even if it were, what's so special about moving from 32 to 64 bits (which we've already done in servers, desktops, and portables alike) that makes it a more game-changing improvement than the move from 16 to 32?

Re:64-bit embedded possibilities... (1)

h4rr4r (612664) | more than 3 years ago | (#34288566)

RAM is cheap as hell. $50 will get you as much RAM as 32 bits can address. So for $100 you're talking twice as much as it could address. Kids these days.

Re:64-bit embedded possibilities... (3, Insightful)

KiloByte (825081) | more than 3 years ago | (#34287238)

The N900 may be a nice device otherwise, but only 256MB is totally crippling. Most recent smartphones come with 512MB these days. So even for RAM alone, having merely "plans" to migrate to 64-bit today isn't overkill; it's long overdue.

About your idea of just mmapping everything: the speed difference between memory and disk/flash is so big that the current split is pretty vital to a non-toy OS. I'd limit mmap to specific tasks, for which it is indeed underused.

Re:64-bit embedded possibilities... (1)

oji-sama (1151023) | more than 3 years ago | (#34288892)

I'm pretty sure you don't need 64 bits to get above 512MB. Once we have phones with 2GB, we'll be on our way to problems.

Then again, I agree that a bit more memory would be nice for heavy(ish) web use. (I tried to check the iPad's memory in Apple's tech specs for comparison, but curiously that detail is omitted. Wikipedia says it's the same 256MB.)

Re:64-bit embedded possibilities... (2, Insightful)

KiloByte (825081) | more than 3 years ago | (#34287466)

Also, the idea of persistent programs has been thought of before. Heck, I once came up with it myself when I was studying (>12 years ago) and talked about it with a professor (Janina Mincer). She immediately pointed out a number of flaws:
* you'll lose all your data the moment your program crashes. Trying to continue from a badly inconsistent state just ensures further corruption. COWing it from a snapshot is not a solution, since you don't know whether the original snapshot already had some hidden corruption.
* there is no way to make an upgrade, even for a trivial bugfix
* config files are human-editable in sane systems for a reason; having the setup only in internal variables would destroy that

Re:64-bit embedded possibilities... (1)

DragonWriter (970822) | more than 3 years ago | (#34287766)

Also, the idea of persistent programs has been thought before. Heck, I once came up with it myself when I was studying (>12 years ago), and talked about it with a professor (Janina Mincer). She immediately pointed a number of flaws:
* you'll lose all your data the moment your program crashes. Trying to continue from a badly inconsistent state just ensures further corruption. COWing it from a snapshot is not a solution since you don't know if the original snapshot didn't already have some hidden corruption.

This isn't any different from traditional saved data separate from the program, really.

* there is no way to make an upgrade -- even for a trivial bugfix

Hot upgrades to running code are very common in certain environments. They certainly are not impossible.

* config files are human-editable in sane systems for a reason, having the setup only in internal variables would destroy that

Having the setup live in internal variables doesn't prevent either having the program itself having a user-friendly method of accessing the configuration data, or having a mechanism where it can receive such updates from the outside.

Re:64-bit embedded possibilities... (1)

MarcQuadra (129430) | more than 3 years ago | (#34288022)

That's because your professor is locked in to the model we've been using, one that evolved when resources were scarce. That's no longer the case.

-Losing data: Obviously, we're not talking about getting rid of files, just pre-packaging apps in a running state between sessions. Your word processor is still going to save the data to a 'file' that, by requirement, must live on 'cold' storage (disk-backed or network) or in the cloud. The app session would run a little check when restored to verify whether the file has changed since the app was last awake, and if so, ask you if you want to reload it.

-Upgrades/Patches: If there's a need for one, it is applied to the 'root files' (that are backed on disk) that spawn the app sessions and the app sessions are marked 'dirty' so they can't restore, they must be restarted. There would have to be a system in the hypervisor to track dependencies and mark things as 'dirty' as their dependencies were patched.

-Config files: Would remain the same... See my reference to 'root files' previously. Most software would still be 'files on the disk', but their running state would be saved. In the case of a config change that required the app to be restarted, the app would obviously be able to mark itself 'dirty' and reload the next time you tried to restore it. Software that isn't 'yours', like Google Apps, or game demos, or restricted freeware, or University-owned expensive software, would be distributed over the network in time or session-limited 'stored sessions'.

Re:64-bit embedded possibilities... (1, Insightful)

jcr (53032) | more than 3 years ago | (#34288412)

You should look up Shapiro's work on EROS, and read up on its predecessor, KeyKOS. The problems you list above have been solved, decades ago.

-jcr

Re:64-bit embedded possibilities... (0)

Anonymous Coward | more than 3 years ago | (#34289220)

Fear the Libertarians! If they get their way, the corporations will have nothing stopping them from completing the transition to oligarchy and will fuck over anything or anyone that stands between them and profit and have even less stopping them from doing so than they do now.

Re:64-bit embedded possibilities... (1)

dreamchaser (49529) | more than 3 years ago | (#34288672)

Also, the idea of persistent programs has been thought before. Heck, I once came up with it myself when I was studying (>12 years ago), and talked about it with a professor (Janina Mincer). She immediately pointed a number of flaws:
* you'll lose all your data the moment your program crashes. Trying to continue from a badly inconsistent state just ensures further corruption. COWing it from a snapshot is not a solution since you don't know if the original snapshot didn't already have some hidden corruption.
* there is no way to make an upgrade -- even for a trivial bugfix
* config files are human-editable in sane systems for a reason, having the setup only in internal variables would destroy that

Your professor wasn't very smart then. All of those problems are easily addressed, as other posters have pointed out.

You know what they say. Those who can do, do. Those who can't, teach.

Re:64-bit embedded possibilities... (3, Interesting)

CODiNE (27417) | more than 3 years ago | (#34288118)

Rumor is that's what Apple is working towards with Lion and iOS API's being added to the Desktop OS.

With built in suspend and resume on all apps it becomes trivial to move a running process over to another device. I suppose they'll sell it to end-users as a desktop in a cloud, probably a Me.com service of some kind.

Re:64-bit embedded possibilities... (1)

noidentity (188756) | more than 3 years ago | (#34288266)

I'm still not imagining how this helps. OK, so you've got, let's say, 64GB of flash memory and 1GB of RAM. You don't want to just map all of this into each user process, especially not writable, otherwise you'll have corruption. So you only map things in when the process opens the file, handing it a pointer to where the file is mmap()'d. And you can do this with a 32-bit address space as well, as long as the process doesn't open more than a couple of GB of memory-mapped files at once. That doesn't seem like a notable limitation; if it's a media file, it's going to be using normal I/O calls for streaming, not reading it all from memory. Maybe I'm missing something here, but it seems a 32-bit CPU with some kind of memory controller could map files just as well.

Re:64-bit embedded possibilities... (2, Interesting)

VortexCortex (1117377) | more than 3 years ago | (#34288298)

...That opens up a lot of options for stuff like putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet, and other cool futuristic stuff.

You can do this "futuristic stuff" on both 32 bit and 64 bit platforms. I had to write my own C++ memory manager to make it easy to store & restore application state.

To do real-time syncing applications (esp. games) over the Internet I implemented another layer to provide support for mutual exclusion, client/server privileges, variables with value prediction and smoothing -- which I needed anyhow for my Multi-Threaded JavaScript-like scripting language (Screw coding for each core separately, I wanted a smarter language).

I've also achieved distributing "applications that are already running", (I hear smalltalk has this feature as well as other languages that support VM Images).

It would be nice if these features were supported by the OS, but I'm not waiting around for something I can do right now.

Also: I'm not sure I want all of these features built in to the OS (complexity = potential security holes), esp. when I can achieve them via cross platform libraries and/or an even higher level programming language on our current OSes & hardware.

Collision course or not. (1)

elsJake (1129889) | more than 3 years ago | (#34286864)

It has to be cheap, power efficient, dense (performance per rack unit) and most of all _stable_ if they want to use it for servers.
If they can manage those details it would be an instant hit; x86 servers are mighty expensive for small businesses, at least around where I live.
Either way, some competition would be welcome and is sure to drive costs down.
The other essential problem is getting motherboards to meet the same criteria.

Re:Collision course or not. (0)

Anonymous Coward | more than 3 years ago | (#34287804)

It has to be cheap, power efficient, dense (performance per rack unit) and most of all _stable_ if they want to use it for servers.

ARM is already in use in medical equipment and industrial motor control. Those are two places where bugs have the potential to actually kill people.
What exactly do you mean when you say that it has to be _stable_ for server usage? Aside from desktops and hobby projects, servers have the least need for stability compared to all other uses.

lol (0)

Anonymous Coward | more than 3 years ago | (#34287062)

... years after 64-bit computing is already available, ARM thinks they're going to be innovative and do the same... how about some forward thinking and better planning, like moving further ahead to 128-bit.

P.S. I'm not looking for some technical analysis based on today's limitations saying 'oh, that requires too much power' or whatever; the point of innovation is to change what is possible.

Re:lol (1)

WrongSizeGlass (838941) | more than 3 years ago | (#34287120)

I don't think this is about 'innovation', it's about 'improvement' - 'improvement' measured in reduction. Chips that are more efficient and smaller mean more can be packed into a single blade while producing more 'right-sized' CPU horsepower, less heat and lower power consumption (for the CPUs as well as the cooling). This 'improvement' is very measurable in resource consumption and cost, making them 'greener' with a faster return on investment. Another big plus is that the CPUs are cheaper, and if one inner board fails it is cheaper to repair or replace.

Re:lol (0)

Anonymous Coward | more than 3 years ago | (#34287174)

I realize it's not about innovation; that was my point.

Re:lol (1)

dreamchaser (49529) | more than 3 years ago | (#34288702)

I realize it's not about innovation; that was my point.

And nobody claimed it was, so I fail to see your point. It's just an evolution of the ARM architecture that could open up some more potential applications for said architecture.

Re:lol (1)

Hackeron (704093) | more than 3 years ago | (#34287456)

For example, AMD64, which in practice has a 48-bit address space, allows 65,536 times the address space of 32-bit at a small processing overhead. A 128-bit address space is vastly bigger than all the data stored on Earth. There is nothing innovative about getting an address space that big; it's just plain pointless at this point in time.

ARM are not trying to be the fastest or most forward thinking, they are trying to be cost effective and power efficient, and a 64-bit version of their chip would free developers from the 4GB address space limit of 32-bit, to one 65,536 times larger.

I can't stress this enough: this isn't a factor of 2x or a factor of 10x or even 1,000x larger, this is a factor of 65,536x...

Re:lol (2, Insightful)

shogarth (668598) | more than 3 years ago | (#34287744)

One of the more amusing blog entries from Sun engineers was a discussion of the amount of energy needed to completely fill a ZFS file system. A 128-bit address space isn't just optimistically big, it's "freaking huge!"
http://blogs.sun.com/bonwick/entry/128_bit_storage_are_you [sun.com]

Re:lol (1)

del_diablo (1747634) | more than 3 years ago | (#34287520)

Sorry to break it to you, but 32-bit gives access to roughly 4 gigabytes of data.
33-bit gives access to roughly 8 gigabytes of data.
64-bit gives access to a data amount that is completely and utterly insane; unlike before, we don't have an actual need for it yet, in contrast to when the 4-gig ceiling was hurting supercomputers.
65-bit is twice that again.
I think 128-bit is a ridiculous dream for the next century; maybe 80-bit or 94-bit or something else will hit supercomputing instead?
x86-64 is currently limited to a measly 52 bits anyway, which is far, far away from the entire 64-bit space, so ARM could innovate, yes.

Re:lol (1)

ChipMonk (711367) | more than 3 years ago | (#34287768)

The address width and the data width are not dependent on each other. On AMD64, the data width (via the REX.W prefix) is 64 bits, although the maximum physical address width is 12 bits shorter.

It is entirely possible, today, to have native 128-bit registers accessing a 48-bit address space.

Re:lol (1)

del_diablo (1747634) | more than 3 years ago | (#34287830)

Yeah, but unless x86 happens all over again: a future revision bumped from 64 bits to whatever the next step is would be a better move than a half-assed implementation today plus legacy support for 30-year-old tech.

ARM cores to take the place of the x86 dominion? (5, Interesting)

moxsam (917470) | more than 3 years ago | (#34287310)

Would be the most exciting revolution to watch. Since it has a totally different design it changes the parameters of how hardware end products can be built.

As ARM cores are so simple and ARM Holdings does not have its own fabs, anyone could come up with their own optimized ARM-compatible CPUs. It's one of those moments when the right economics and the right technology could fuse together and change stuff.

Re:ARM cores to take the place of the x86 dominion (1)

0123456 (636235) | more than 3 years ago | (#34287346)

As ARM cores are so simple and ARM Holding does not have their own fabs, anyone could come up with their own optimized ARM-compatible CPUs. It's one of those moments when the right economics and the right technology could fuse together and change stuff.

The problem is... Windows. More precisely, proprietary closed-source software which can't just be recompiled for a new architecture.

The huge amount of installed Windows software out there won't run on ARM, so it won't change the mainstream laptop/desktop market any time soon.

Re:ARM cores to take the place of the x86 dominion (2, Interesting)

del_diablo (1747634) | more than 3 years ago | (#34287544)

Well, considering that somewhere between 60-90% of the desktop market in reality doesn't care what their computer is running, as long as they've got access to a browser and Facebook, and in the worst case an office suite on the side for minor work, it wouldn't really matter.
The only real problem is not Windows, it is getting the computers into the mainstream stores to be sold alongside the MacBooks and the various normal Windows OEM solutions. Just getting them there would mean instant market share overnight, because only a minority is application-bound in reality.

Re:ARM cores to take the place of the x86 dominion (1)

TheRaven64 (641858) | more than 3 years ago | (#34287642)

The problem is... Windows. More precisely, proprietary closed-source software which can't just be recompiled for a new architecture.

Much less of a problem than it used to be. Aside from games, how many closed-source software packages do you run that are CPU-limited? In typical usage, the CPU monitor on my laptop rarely goes over 20%. Even emulating everything, it wouldn't be too slow to use. Modern emulators don't emulate everything though, they thunk out to native libraries for things like drawing. That's how Rosetta works on OS X, for example; OS X ships with stub versions of all of the native frameworks for PowerPC, which call the x86 versions outside of the emulator. When you call a library function from an emulated program, you're calling native code. Even if the emulator only runs at 20% of native speed, the apps typically run at over 50% of native speed, meaning that they use 10% of the CPU instead of 5%. You wouldn't want to run all of your code this way, but for the one or two apps that you can't get native versions of, it's acceptable.

Re:ARM cores to take the place of the x86 dominion (1)

cbhacking (979169) | more than 3 years ago | (#34287786)

I was going to mention a few, but then I realized that almost all of them are .NET based. MS already has a .NET implementation on ARM (for their mobile devices) and I believe Mono also works on ARM.

The remaining ones are MS Office (ported to x64 and PPC), Visual Studio (partially .NET and hopefully somewhat portable), Opera (portable), Foxit (there are other PDF apps even if it's not portable), and probably a few more.

Of course, you can't just ignore games. Relatively few of those are portable, and I happen to care about them quite a bit.

The App-Store revolution will change that (1)

rsborg (111459) | more than 3 years ago | (#34287764)

Apple, Google and Canonical have seen the writing on the wall: Make the apps independent of the ISA, and your platform can go anywhere.

Best way to do this is to provide the storefront, and handle distribution integrated with the OS.

I think the App Store is the biggest software revolution from the 00's ... and it's yet to play out completely.

Re:The App-Store revolution will change that (1)

h4rr4r (612664) | more than 3 years ago | (#34288488)

I think you have no idea what a repository is, otherwise app stores would not impress you at all.

Re:ARM cores to take the place of the x86 dominion (1)

LWATCDR (28044) | more than 3 years ago | (#34288114)

Not that big of an issue in the server space. SPARC and POWER5 don't run Windows, and almost all the big server apps already run under Linux, so they can be recompiled without much effort.

Re:ARM cores to take the place of the x86 dominion (1)

gman003 (1693318) | more than 3 years ago | (#34288276)

Apple managed to make the switch from PowerPC to Intel almost seamlessly, thanks to a well-written emulator. Microsoft might be able to do the same.

Windows Mobile (1)

tepples (727027) | more than 3 years ago | (#34288386)

The huge amount of installed Windows software out there won't run on ARM

All the software for Pocket PC aka Windows Mobile (based on Windows CE) already runs on ARM.

Lots of hype. (0)

Anonymous Coward | more than 3 years ago | (#34287682)

Wake me up when ARM has the performance part of the package at least partially addressed. If we want low-cost, low-power, low-performance servers, we already have Atom and Nano, both of which offer x86 binary compatibility, can run the latest releases of Windows and any Linux flavor of the month, and deliver superior performance to ARM. Anyone thinking ARM is heading for a collision course with Intel any time in the next decade... I want some of what you are smoking.

more multi-core action (1)

ChipMonk (711367) | more than 3 years ago | (#34287810)

Imagine these scenarios:

Building a Linux kernel on a dual-core AMD64: "make -j3 bzImage"

Building a Linux kernel on a quad-core or 8-core ARM: "make -j5 bzImage" or "make -j9 bzImage"

Any bets on which one will finish sooner? The smaller ARM die means the same wafer can hold more ARM cores than current Intel or AMD x86 cores. The term "embarrassingly parallel" comes to mind.

So where is my ARM desktop yet? (0)

Anonymous Coward | more than 3 years ago | (#34287742)

I guess it is nice that they are contemplating servers and thousand-dollar cellphones for overpaid yuppies, but where are the hundred-buck, low-power, good-enough-for-surfing ARM desktops or "nettops"? That's what I am really interested in: cheap, good enough, cool-running, electron-sipping, runs-Linux-and-not-x86 machines.

Re:So where is my ARM desktop yet? (1)

Tapewolf (1639955) | more than 3 years ago | (#34288006)

Re:So where is my ARM desktop yet? (2, Interesting)

Anonymous Coward | more than 3 years ago | (#34288402)

Well, that is interesting and all, I'm just wondering about something a bit more modern. We have 1GHz ARM chips in cellphones now, and larger coming, which with sufficient RAM is enough to work as a modern desktop for most uses. I currently still run an old slow single-core and it works fine, but if I could get comparable performance at only 1/10th the electricity use and eliminate all the fans... see what I mean? Way back in 2008 Canonical announced serious ARM support and so on, but there are still no machines to buy from anyplace. I contemplated using a high-end cellphone, but none of them have full keyboard and mouse support, they're beastly expensive, and you can't get any with at least two gigs of RAM, which is de facto the tipping point for a desktop today between "works" and "tear your hair out".

I mean really, the chips themselves are wicked cheap compared to Intel or AMD, so where is a plain-vanilla ARM-based normal-form-factor desktop, ATX or mini-ITX or the like? Seems like they could be making a good-enough desktop for some serious cost reductions and hitting the niche that fits. Now I have an old VIA mini-ITX board, but dang, it requires super-expensive RAM (VIA-specific; generic PC133 sticks do NOT work) just to get it to one full gig, and the 256 megs I have just don't cut it. "Good enough", quiet, cheap to run and cheap to buy is what I'm after, and it sure seems like an ARM solution would fit; I just can't find one, and I have looked now for two years off and on. I don't want a teeny netbook, I want a bog-standard cheap desktop machine, just with ARM instead of AMD or Intel.

Re:So where is my ARM desktop yet? (0)

Anonymous Coward | more than 3 years ago | (#34289164)

I have a teeny ARM netbook and I agree with you. It is cool and all, but I can't bear using it for long. Too small, and once any component dies, it's all over. Eight of them would easily fit inside a desktop case. Almost no heat generation or power consumption, and fast enough for most uses if the load were shared between the cores. No need for expensive embedded components either. Eight cores and, what, 3GB of RAM would give you a competitive desktop out of already-available tech.
I think the OS is what's holding the investors back. They can't ship Windows or Mac OS, and let's face it, a netbook with Ubuntu is one thing, but a full desktop...

Re:So where is my ARM desktop yet? (3, Informative)

h4rr4r (612664) | more than 3 years ago | (#34288494)

You can get a $50 zipit z2 and run debian arm on that. Fits in the palm of your hand and does all that.

Say it isn't so...!! (0)

Anonymous Coward | more than 3 years ago | (#34287980)

No! Not the dreaded "collision course." Can you imagine the energy that will be released when these two behemoths collide!

Quick, call the Intel bunnies and tell them to don their purple Nikes! Phone the folks at the LHC and let them know so they can accelerate their schedules before it's too late and we all die without knowing if the Big Bang really abhors vacuuming, or if Newton only thought he saw stars after being hit on the head with an apple.

We are all on a one-way train to marketing-speak Valhalla, and we're never getting off!

Seriously, someone should burn Rupert Murdoch's style book.

64-bit pointers considered harmful (5, Interesting)

jensend (71114) | more than 3 years ago | (#34288080)

This isn't like the 16->32 bit transition where it quickly became apparent that the benefits were large enough and the costs both small enough and rapidly decreasing that all but the smallest microcontrollers could benefit from both the switch and the economies of scale. 64-bit pointers help only in select situations, they come at a large cost, and as fabs start reaching the atomic scale we're much less confident that Moore's Law will decrease those costs to the level of irrelevance anytime soon.

Most uses don't need >4 gigabytes of RAM, and it takes extra memory to compensate for huge pointers. Cache pressure increases, causing a performance drop. Sure, often x86-64 code beats 32-bit x86 code, but that's mostly because x86-64 adds registers on a very register-constrained architecture and partly because of wider integer and FP units. 64-bit addressing is usually a drag, and it's the addressing that makes a CPU "64-bit". ARM doesn't have a similar register constraint problem, and the cost of 64-bit pointers would be especially obvious in the mobile space, where cache is more constrained- one of the most important things ARM has done to increase performance in recent years was Thumb mode i.e. 16-bit instructions, decreasing cache pressure.

Most of those who do need more than 4GB don't need more than 4G of virtual address space for a single process, in which case having the OS use 64-bit addressing while apps use 32-bit pointers is a performance boon. The ideal for x86 (which nobody seems to have tried) would be to have x86-64 instructions and registers available to programs but have the programs use 32-bit pointers, as noted by no less than Don Knuth [stanford.edu] :

It is absolutely idiotic to have 64-bit pointers when I compile a program that uses less than 4 gigabytes of RAM. When such pointer values appear inside a struct, they not only waste half the memory, they effectively throw away half of the cache.

The gcc manpage advertises an option "-mlong32" that sounds like what I want. Namely, I think it would compile code for my x86-64 architecture, taking advantage of the extra registers etc., but it would also know that my program is going to live inside a 32-bit virtual address space.

Unfortunately, the -mlong32 option was introduced only for MIPS computers, years ago. Nobody has yet adopted such conventions for today's most popular architecture. Probably that happens because programs compiled with this convention will need to be loaded with a special version of libc.

Please, somebody, make that possible.

It's funny to continually hear people clamoring for native 64-bit versions of their applications when that often will just slow things down. One notable instance: Sun/Oracle have told people all along not to use a 64-bit JVM unless they really need a single JVM instance to use more than 4GB of memory, and the pointer compression scheme they use for the 64-bit JVM is vital to keeping a reasonable level of performance with today's systems.

Re:Bullshit considered harmful (0)

Anonymous Coward | more than 3 years ago | (#34288394)

It's funny to continually hear people clamoring for native 64-bit versions of their applications when that often will just slow things down.

Yet benchmarks consistently show that despite the overhead of 64 bit pointers, nearly every program is faster on AMD64.

  • http://www.tuxradar.com/content/ubuntu-904-32-bit-vs-64-bit-benchmarks
  • http://www.phoronix.com/scan.php?page=article&item=ubuntu_32_pae&num=1

Re:64-bit pointers considered harmful (1)

gman003 (1693318) | more than 3 years ago | (#34288398)

Several points here:
  • There actually is a method for letting 32-bit programs coexist with an OS that addresses more physical memory: PAE, which has been around since the Pentium Pro (originally 36-bit physical, up to 52-bit under AMD64). It's almost always enabled on Linux and Mac OS X, but isn't available for non-server versions of Windows. So it's been tried, but isn't well known or widely used.
  • Currently, 64-bit processors only implement a 48-bit virtual address space, precisely for some of the reasons you listed. The architecture is designed to scale up to full 64-bit addresses, but the processors internally use 48-bit addresses to save on cache, bus width, etc.
  • While most programs don't use more than 4GB of memory, there is one rather significant market segment that does: gaming. Most games released over the past few years can benefit greatly from large amounts of RAM. 4GB is now considered the minimum for modern gaming; many gamers use 8GB or even 16GB. Photo processing is another: for some time, Hollywood was editing on Linux and the GIMP because it was the only way to work with films that used nearly a gigabyte per frame. 3D rendering/modeling/CAD is yet another. While Joe User might not really need 64-bit addressing, there are enough users who DO need it that every OS has to support it, and it's just less of a headache to use 64-bit throughout than to set up some processes as 32-bit and others as 64-bit.

Re:64-bit pointers considered harmful (2, Insightful)

jensend (71114) | more than 3 years ago | (#34289202)

PAE _is_ frequently used: whenever an x86-64 processor is in long mode it's using PAE. PAE has been around a lot longer than long mode, but few people had much reason to use it before long mode came along; not because it didn't accomplish anything, but because memory was too expensive and people had little reason to use that much. On a processor where long mode is available there's little reason to use PAE without long mode: long mode gives you all those extra registers etc.

What I and my homeboy Knuth are talking about for x86 has more to do with the ABI than with the hardware. As Knuth says, some of the first places work would need to be done are the compiler and the libc; some OS support would also be required.

Yes, current 64-bit processors can't use more than 40 bits of physical memory or 48 bits of virtual. But the pointers are still the full 64 bits wide, and at no point does the processor store them in anything less than 64 bits. Limiting things to 48 bits of address space just simplifies MMUs etc.; it doesn't save space. Trying to store other things in the unused bits of a register holding a 48-bit pointer would be more hassle than anybody wants to deal with. I mean, sure, you can do a bunch of bit twiddling to put junk in those bits when storing a pointer, but it's going to be more expensive than it's worth.

I don't think there's any game out there which uses more than 4GB of address space in a single process, regardless of the settings you're using. If you can find concrete evidence of one, let me know.

Even finding situations where games really benefit from more than 4GB of total system memory is rare. I haven't seen too much in the way of benchmarks comparing differing amounts of RAM for this year's DX11 games, but I know that practically no games released before this year benefit from more than 3GB of system memory (of the benchmarks I saw the one which really contested that was published by Corsair, and they can't be accused of being indifferent to how much memory people buy). For games that do appear to benefit at their very highest detail settings at extreme resolutions, I'd still like to see evidence that the visual quality is noticeably different from what you get when you bump the settings down a notch and save a gig of RAM.

It's true that people working on films in ridiculously high resolutions and some 3d modeling/rendering/CAD folks may want more than 4GB of RAM available to a single process. But those and the other uses for >4GB in one process are a tiny portion of the overall market and have nothing at all to do with ARM and mobile. And you've vastly overstated the effort it takes to be able to support smaller pointers and the simplifications available if you stick with 64-bit.

Finally- My PDA will support 4+Gb Ram! (1)

gearloos (816828) | more than 3 years ago | (#34288454)

So, now I can really get WinCE Jamming! lol J/K of course...

new debian platform (1)

Cyko_01 (1092499) | more than 3 years ago | (#34288548)

oh god, not another architecture to maintain! This is going to set back the next release a few years for sure!

@mods: it was a joke, I'm not trolling.

Re:new debian platform (1)

h4rr4r (612664) | more than 3 years ago | (#34288588)

Debian is already maintaining an arm branch, so pretty bad joke.

The only real question: (1)

rickb928 (945187) | more than 3 years ago | (#34288926)

When will it run Android?
