
Intel Salivates Over Virtual World Processing Demands

Zonk posted about 7 years ago | from the put-your-tongue-back-in-your-mouth dept.


CNet has up an article looking at the lucrative virtual world market for processor companies. An Intel developer forum held in San Francisco this week highlighted the opportunities for selling hardware to both consumers and vendors in the VW marketplace. "[Chief Technology Officer Justin Rattner] showed statistics that indicated a PC's processor bumps up to 20 percent utilization while browsing the Web, while its graphics processor doesn't even break above 1 percent. But running Second Life--even with today's coarse graphics--pushes those to 70 percent for the main processor and 35 to 70 percent for the graphics processor, he said. The Google Maps Web site and Google Earth software pose intermediate demands. Running a virtual worlds server is vastly more computationally challenging, though, when compared with 2D Web sites and even massively multiplayer online games such as Eve Online. An Eve Online server can handle 34,420 users at a time, but Second Life maxes a server out with just 160 users."


52 comments


What really stiffens their nipples... (3, Insightful)

jollyreaper (513215) | about 7 years ago | (#20687281)

...is the lack of optimization in the code. "Yes," says Br'er Intel. "Please throw more processors at the problem! Optimization is for pussies!"

Re:What really stiffens their nipples... (1)

moderatorrater (1095745) | about 7 years ago | (#20687391)

Agreed. What's so different about Second Life that Eve Online can get 200 to 300 times as many people on its servers? It definitely sounds like the code to me.

Re:What really stiffens their nipples... (4, Informative)

Andy Dodd (701) | about 7 years ago | (#20687507)

Nah, the article is just plain wrong and uses differing meanings for "server".

SL - "Server" appears to mean a single CPU or box
EVE - "Server" is used to describe the entirety of the Tranquility cluster, which has at least 150-200+ dual or quad-core blades that handle the solar systems, plus some serious database servers.

EVE can achieve around 150-200 users on a single machine before things start getting laggy; things get massively painful in the 500-700 range, and much above that, nodes start dropping. EVE has an architectural limitation in that processing for a given solar system cannot be spread across multiple CPUs, so if a single solar system in EVE has 200+ players, they're all on the same CPU. Meanwhile, 10 systems with 5 users each will likely share a CPU, and 50 systems with zero users probably also share one.
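A toy sketch of that placement constraint (editor's illustration in Python; the function and names are invented, not CCP's code): a whole solar system is the smallest unit that can be assigned to a node, so one crowded system always lands on a single CPU no matter how loaded it gets.

    # Hypothetical sketch: a solar system is the unit of placement, so one
    # crowded system can never be split across nodes, while many quiet
    # systems can share a single node.
    def assign_systems(systems, nodes):
        """systems: {name: player_count}; nodes: list of node ids."""
        load = {n: 0 for n in nodes}
        placement = {}
        for name, players in sorted(systems.items(), key=lambda kv: -kv[1]):
            node = min(load, key=load.get)  # least-loaded node takes the whole system
            placement[name] = node
            load[node] += players           # 600 players in one system still hit ONE node
        return placement

    print(assign_systems({"Jita": 600, "Amarr": 250, "Hek": 5}, ["blade-1", "blade-2"]))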

Re:What really stiffens their nipples... (2, Insightful)

BlowHole666 (1152399) | about 7 years ago | (#20687393)

Well, you don't know what kind of background processing Eve is doing and what kind Second Life is doing. If the Eve server is just recording your position and health while Second Life is recording position, health, money, surroundings, friends online, etc., then one is doing more processing than the other. Optimization can only fix so much; as the problems get harder, you have to increase the number and speed of your processors. Just like you can't run Fedora 6 with a GUI on a computer with 64 MB of RAM and an 800 MHz processor. Maybe Fedora should optimize their code?

Re:What really stiffens their nipples... (1)

Clanked (1156473) | about 7 years ago | (#20688057)

Wouldn't removing the constant calls to check your money/friends online/etc. be considered optimization? I don't believe those things need to be checked constantly. So basically, you just stated that Eve is optimized, while SL isn't.

Re:What really stiffens their nipples... (2, Insightful)

Unoti (731964) | about 7 years ago | (#20689477)

Users can create objects, and put scripts into those objects. They routinely do this. All those scripts run concurrently. So while it might not really be 'necessary' to run those scripts, they make the world what it is. Say I have a dragon avatar. It might seem silly to have 80 scripts running on my avatar's body at a time, but those scripts let me move like a dragon, blow smoke rings out of my nose at regular intervals, and so on. The content is created by the users, so that reduces the kind of optimization that can be done. But it also opens up a tremendous world of possibility.

Re:What really stiffens their nipples... (1)

Brian Gordon (987471) | about 7 years ago | (#20690007)

Wait a second, you can write scripts that are executed on the server? What's to keep you from blowing 10000 smoke rings per second and crashing the server?

Re:What really stiffens their nipples... (1)

jandrese (485) | about 7 years ago | (#20692157)

Your scripts get a limited number of cycles in which to execute during each "tick" of the server. However, it was still trivial (at least the last time I messed with it) to create self-replicating objects that quickly crash the server so hard they force a rollback. A lot of developers have even done that accidentally, and griefers use it from time to time. It's not a particularly smart thing to do, though, since crashing the server is an easy way to get banned from SL (or at least put out in the cornfield).
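A rough sketch of that per-tick budgeting (an illustration only, not Linden Lab's actual scheduler; the Script class and budget figure are invented):

    # Each script gets a bounded slice of work per server tick, so a single
    # runaway script can't stall the sim; unfinished work resumes next tick.
    class Script:
        """Toy stand-in for an in-world script with queued work."""
        def __init__(self, pending):
            self.pending = pending   # "cycles" of work still queued
        def step(self):
            self.pending -= 1
            return 1                 # cycles consumed by this step

    TICK_BUDGET = 1000               # invented per-script budget

    def run_tick(scripts):
        for s in scripts:
            spent = 0
            while spent < TICK_BUDGET and s.pending > 0:
                spent += s.step()
        # This caps CPU per script, but does nothing about objects that
        # replicate themselves, which is the failure mode described above.

    run_tick([Script(5000), Script(10)])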

Re:What really stiffens their nipples... (1)

Brian Gordon (987471) | about 7 years ago | (#20692501)

Wow. We all know that stops griefers.

Re:What really stiffens their nipples... (1)

Amouth (879122) | about 7 years ago | (#20688959)

Just like you can't run Fedora 6 with a GUI on a computer with 64 MB of RAM and an 800 MHz processor. Maybe Fedora should optimize their code?
If that is true, then yes.

Re:What really stiffens their nipples... (1)

togofspookware (464119) | about 7 years ago | (#20689033)

> Just like you can't run Fedora 6 with a GUI on a computer with 64 MB of RAM
> and an 800 MHz processor. Maybe Fedora should optimize their code?

I ran Windows 98 on such a machine for a long time with no problems.
Sounds to me less like an 'optimisation' problem and more like Fedora could cut some fancy, unnecessary, processor-eating computations. -_o

I hope you're kidding about this... (1)

argent (18001) | about 7 years ago | (#20693833)

Just like you can't run Fedora 6 with a GUI on a computer with 64 MB of RAM and an 800 MHz processor.

You can't? Even if you run it headless? Why the hell not?

Re:I hope you're kidding about this... (1)

argent (18001) | about 7 years ago | (#20693843)

OK, I missed the GUI part, but even then, if you ran Window Maker or twm it should work. You don't have to use Enlightenment or whatever insane memory-hog window manager they use.

s/An Eve Online/THE Eve Online/ (1)

A beautiful mind (821714) | about 7 years ago | (#20687375)

As there is only one gaming instance in Eve Online.

Now that I think of it, Second Life is a single instance too, so in this case, to make a fair comparison, it would be more apt to say that Eve Online handles around 100 users with what could be considered normal performance on a single (solar system) server.

Re:s/An Eve Online/THE Eve Online/ (1)

Broken scope (973885) | about 7 years ago | (#20687411)

Eve is one large cluster of servers; certain places like Jita (a major trade center) tend to have more of the cluster dedicated to them than other parts.

Re:s/An Eve Online/THE Eve Online/ (2, Interesting)

A beautiful mind (821714) | about 7 years ago | (#20687493)

No. I actually happen to know a great deal about their setup, and that solar system handles around 600 users on average, under great strain. Due to Eve Online's legacy architecture (they started developing around 1998), only one machine is responsible for a given solar system, and their server-side code doesn't have multiprocessor support.

This causes lots of lag in high-load, high-user-count systems, since they cannot scale by adding hardware as demand increases, to the point where I wouldn't call it a normal user experience.

Errors in article? (3, Informative)

Andy Dodd (701) | about 7 years ago | (#20687409)

I find it hard to believe that SL doesn't allow more than 160 concurrent users to log in simultaneously. 160 users per CPU or per chassis blade, maybe, but 160 total all at once or even 160 per shard?

EVE does not have 34,000 people on one server. It's one shard, which people call a "server", but the Tranquility cluster is some SERIOUS hardware. I think they're up to something like 160-200 dual- or quad-core blades, at least.
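Dividing the thread's own estimates makes the point (rough arithmetic, not official figures):

    eve_peak_users = 34_420  # the summary's "one server" figure
    eve_blades = 200         # rough Tranquility blade count cited above
    print(eve_peak_users / eve_blades)  # ~172 users per blade
    # ...which is the same ballpark as the 160 users the article pins on a
    # single Second Life "server". Per box, the numbers aren't far apart.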

How do you define "server"? (1)

RingDev (879105) | about 7 years ago | (#20687845)

The article could be accurate so long as SL and EVE each have their own definition of what a "server" is. The more important question, IMO, is how many people they can support for $X in hardware.

-Rick

Re:Errors in article? (1)

bockelboy (824282) | about 7 years ago | (#20688501)

I think they're up to something like 160-200 dual- or quad-core blades, at least.
So, a small-to-medium university cluster? I doubt the server purchases for Second Life or Eve Online over the last year amount to a tenth of a percent of Intel's weekly sales. It's the *clients* they are thinking about.

Hell, Intel should be *donating* CPUs to EVE Online, then. If buying a company 200 blades means that 34,000 users upgrade their game boxen regularly, the investment probably pays for itself.

A big handful of info (3, Informative)

Caerdwyn (829058) | about 7 years ago | (#20688861)

Second Life is not sharded. It's clustered (they refer to the cluster as "The Grid"). Everyone is on the same grid (though there is a teen grid and a test grid, they aren't used much, and they are 100% independent except for authentication/transaction systems).

In Second Life, the game world is broken up into "sims": sections of the virtual world that each represent 256 x 256 in-game "meters". Each sim has its own master process, two of which run on each server within a cluster. Everything that goes on within the sim stays on that server, except for "global" systems: inventory, monetary transactions, group/private IM, login authorization, and assets (textures, sounds, etc.). When you walk/fly/teleport from one sim to another, you are moving to another server. This is frequently a painful process, and you can experience long delays or dropped connections if the destination server is unable to take in your session.
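A minimal sketch of that layout (invented names; illustration only): a grid of 256x256 m sims, each owned by one process, where crossing a border means handing your session to another server.

    SIM_SIZE = 256  # meters per sim edge, as described above

    def sim_for(x, y):
        """Map global meter coordinates to the (column, row) of the owning sim."""
        return (int(x) // SIM_SIZE, int(y) // SIM_SIZE)

    def needs_handoff(old_pos, new_pos):
        """True when a move crosses a sim border and the session must migrate."""
        return sim_for(*old_pos) != sim_for(*new_pos)

    print(needs_handoff((255.0, 10.0), (256.5, 10.0)))  # True: different server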

There are no global "game rules" per se; the base systems are movement (fly, walk, teleport, with collision detection), lighting, spatialized sound, object placement, object and state delivery to the client (including animations, other players, textures, sounds, etc.), a primitive physics engine, and an advanced scripting system. If a combat system is in place on a given sim, it's been written by a player (the built-in crude "combat" system really isn't used).

Because of the very high overhead of script processing, the pipe dream of player-created "mini MMOs" has never materialized. There are some embedded games, but for performance reasons they don't scale, and because of the small size of sims their scope is very limited.

There are two types of sims: mainland and private. Mainland sims are run by Linden Labs, while private sims are leased by and run by individuals (I have two). Mainland sims have more restrictive conditions and behavioral rules; private sims are largely up to the sim owners, though there is still a minimum AUP for every location, be it mainland or private.

Second Life uses an OpenGL-based client, which has recently been open-sourced. Because Second Life is connected directly to your credit card, read-write (you can buy and sell in-game objects and services with "Lindens", the in-game currency, and buy and sell Lindens for real cash; hence the recent crackdown on online casino operations, which were de facto unregulated real-cash systems), there are significant hazards associated with a client built from any source other than what Linden Labs has vetted.

One significant shackle to Second Life is the fact of player-created content: when SL releases a feature, players build around that feature's abilities AND limitations. If a bug fix changes how objects are rendered, etc., then it will break player-content that has worked around (or even incorporated) that bug. SL therefore has very limited room in which to improve things; given that their entire proposition is "it's player-created content", breaking player-created content breaks EVERYTHING. Once a feature ships, it's more or less graven in stone. Optimization becomes a nightmare.

Second Life uses blade servers with AMD processors, supplied by a company called Silicon Mechanics.

Comparing Second Life to Eve or WoW is apples and oranges. SL has much more in common with a chat room system that happens to have 3-D rendering and animations than with a modern MMO; it's simply a different thing.

SL claims about 9 million accounts and 40,000+ simultaneous connects at peak usage. There is some controversy about the validity of these numbers, as most of those accounts are free/unverified, and "camping" is a widespread practice (characters logged in and idle to artificially boost traffic numbers for sim and business owners so they appear higher in in-game searches; sim and business owners often pay micropayments to campers in return for boosting their traffic ratings).

There are many other SL-style "sandbox"/microtransaction games currently in development on the premise of "great idea but Linden Labs is running it poorly enough that there is opportunity for others in the same product type". I happen to agree; the challenge is for one of these potential competitors to gain critical mass and learn from Linden Labs' mistakes.

Don't dump on the OS client... (1)

argent (18001) | about 7 years ago | (#20690023)

Because Second Life is connected directly to your credit card read-write, there are significant hazards associated with a client built from any source other than what Linden Labs has vetted.

The recent URL exploit on Windows (it was a command-line parsing bug, so it wouldn't have affected Mac or Linux, since applications there don't parse their own command lines the same way) wasn't from any open-source build. :p

Re:Don't dump on the OS client... (1)

Caerdwyn (829058) | about 7 years ago | (#20690393)

I never claimed that any particular exploit to date was in the open-source client. It is, however, a fact that malicious code embedded in an unchecked build of the client could drain your Lindens, all at once or a few at a time.

1. Buy $9.95 worth of Lindens. Why $9.95? It's the subscription amount, so it won't look suspicious at first glance, assuming you ever look at your transaction history (most people don't).
2. Suppress the purchase confirmation dialog while sending the confirmation.
3. Transfer the Lindens to the fraud-recipient avatar (one of many free accounts in the fraudster's collection).
4. Suppress the transfer confirmation dialog while sending the confirmation.
5. Launder the Lindens via whatever method (strings of micropurchases, etc.) to the true recipient of the stolen money. Make sure some are purchases from well-known, upstanding vendors, to introduce uncertainty that anything wrong is happening; 20 percent should do.
6. Cash out from the final-destination avatar to a stolen credit card.
7. Withdraw the exact amount of cash you just transferred in, via whatever method. At the end of the month, the credit card theft victim doesn't realize anything is wrong, since the bottom line matches expectations, and few people check line items if the bottom line is as expected.

Security is the realm of the possible. I think everyone can agree that someone, somewhere, is going through the source code looking for a way to do something similar to the above and to obfuscate it so it's hard to detect by reading the code. If they figure out how, they will attempt to introduce that code, either through a non-Linden Labs client offering "extended features" or by sneaking it in among other legitimate check-ins to get it into the "official" build.

To reiterate: at no point did I declare that this has already happened.

Re:Don't dump on the OS client... (1)

argent (18001) | about 7 years ago | (#20691403)

It's certainly possible, and a modicum of care to make sure you're getting a client from someone who has standing in the community is wise, but demanding that Linden Labs vet the client is overkill.

All it would take is one person noticing extra transfers in their transaction history to totally expose something like that, and with a direct link from the modified client to the distributor it would be relatively easy to track down.

And it's a lot harder to hide changes in patches to a client than you might think, especially when most of them are posted to JIRA *and* actively examined and discussed on the developer mailing list. Anyone smart enough to even have a chance could make more money, more easily and safely, with a couple of weeks of contracting on the side.

Finally, there are much safer schemes for getting money through applications that support money transfers than a crooked client. For example, you could write a program that does something useful for SL users and have it install a keystroke stealer in the client they're using. Lots more people are likely to use an add-on program like that, and the connection is indirect. In fact, the program doesn't even need to be SL-related: some bad guy with a botnet could add a bit extra to his bottom line by adding checks for installed copies of SL, alongside the information he's already pulling from his keystroke stealers looking for credit card info in web forms.

None of which require anything to be open source, since people have done the equivalent in closed-source software. Like Internet Explorer.

Re:A big handful of info (1)

jafuser (112236) | about 7 years ago | (#20696841)

Because of the very high overhead of script processing, the pipe dream of player-created "mini MMOs" has never materialized

The main limitations I encountered in this area were:
  • Scripts are limited to 16 kilobytes of memory, in a very high-level language that is not very memory-efficient
  • Communication between objects is very crude and unreliable
  • Communication with the outside world is very unstable and bandwidth-limited, preventing you from developing a reliable "core" server to control the global aspects of the game
  • Geography is very expensive in SL: a 256x256 meter region will cost you over a thousand dollars up front and $300 per month in maintenance
  • Regions are practically limited to about 30-40 users before performance becomes unusable, especially in a game environment
  • Timing-dependent actions are often thwarted by latency from several of the above issues


One significant shackle to Second Life is the fact of player-created content: when SL releases a feature, players build around that feature's abilities AND limitations.

A very good example of this is a "transparency" hack that people exploited long ago to make parts of their avatars invisible, by masking them with a special texture that triggered an alpha bug in the graphics engine at the time.

Now that so much content has been made to rely on this, the developers have to explicitly code around it and implement it in future versions using an extra rendering pass. If this capability had been written into the graphics engine explicitly when people first demonstrated a demand for it, it could have been adapted far more elegantly as the engine evolved.

Another example is the 256x256 meter region design. Many scripts (and presumably server code) have now been written that expect this as a constant, making it impractical and unpopular to expand the region size (e.g., to 512x512 or even 65536x65536).
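To make that concrete, a hypothetical script fragment with the region size baked in -- the kind of assumption that accumulates across thousands of user scripts:

    REGION_SIZE = 256.0  # hard-coded, as in countless existing scripts

    def wrap_to_region(coord):
        # Correct only while regions stay 256 m on a side; a move to 512x512
        # regions would silently break every script that does this.
        return coord % REGION_SIZE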

There are many other SL-style "sandbox"/microtransaction games currently in development on the premise of "great idea but Linden Labs is running it poorly enough that there is opportunity for others in the same product type".

Unfortunately, most or perhaps all of these seem to be designed around the idea of creating your own turnkey MMO instead of a generic realtime dynamic platform like SL. I do look forward to the day another competing service opens up, though, as I have abandoned hope of SL ever opening a second system, completely redesigned based on the lessons they learned the first time around.

Sounds like it. (2, Informative)

argent (18001) | about 7 years ago | (#20689965)

I find it hard to believe that SL doesn't allow more than 160 concurrent users to log in simultaneously.

There's no "shards". The world is contiguous: you pause less than a second crossing from one sim to another and it's even possible to fly planes across multiple region boundaries at 25 meters a second (hitting a new region every 7-10 seconds depending on the direction you're flying) without losing control... you *can* still "outfly" the sims and crash but it's gotten a lot better than it has been.

Typically the SL server farm (the grid) supports 20,000-30,000 concurrent users, and seeing over 40,000 isn't unusual.

It's a lot more servers than an Eve cluster, but it's also doing a lot of physics on in-world objects built and scripted by amateurs.

Re:Errors in article? (1)

jafuser (112236) | about 7 years ago | (#20696645)

Generally, each 256x256 meter region of SL runs on its own core (it used to be one region per CPU, prior to multicore hardware). This limits each region to about 40 or so users, but the grid of regions is theoretically able to scale indefinitely. In reality, though, there are grid-wide services that need to work too (e.g., the asset server), and those have serious problems scaling with a growing population.

Unfortunately, SL was developed in a very "duct tape and baling wire" style early on, so they have long-term legacy issues they have been struggling with for quite some time. At this point, I don't think refactoring can fix their problems; a complete rewrite with a much more elegant design may be in order. I personally think the developers are too set in their ways to make that happen, though, so I look forward to a new company coming along some day to evolve the virtual reality platform.

Wirth's Law (3, Funny)

pizza_milkshake (580452) | about 7 years ago | (#20687421)

"Software is decelerating faster than hardware is accelerating."
http://en.wikipedia.org/wiki/Wirth's_Law [wikipedia.org]

Re:Wirth's Law (1)

jo42 (227475) | about 7 years ago | (#20691757)

I blame it all on OOP (i.e. Java) programming...

Re:Wirth's Law (1)

sg7jimr (614458) | about 7 years ago | (#20697819)

While Object Oriented Programming is an easy target because it's an unnecessary-fluffy-overhyped process layer put on top of traditional structured programming (that did the job quite well when followed), and is a process layer which tends to overcomplicate programming for no real benefit and in fact is counterproductive because it hides how stuff actually works and gets in the way of debugging and optimization...

(Yes the bloated and inefficient sentence structure is intentional and reflects the topic)

I think it's more a mindset than a methodology that's at fault. I remember in Computer Science classes being told by the professor that I should not be so worried about memory and computational efficiency, that I was ignoring the fact that hardware kept getting better and cheaper and would keep up. I thought he was incredibly dense because in my opinion efficiency is always desirable - you can always find another use for spare CPU cycles or spare memory, and an efficient program is always going to perform better. But a huge number of programmers have probably been taught this way.

Kohls spooges over real world clothing demands (3, Insightful)

Sciros (986030) | about 7 years ago | (#20687465)

from the put-your-cock-back-in-your-pants department

The other day Kohls ... business peoples.. got together and talked about how people wear way more clothes when they go outside than when they stay at home. "This whole 'real world' is frigging nuts as far as how much clothing the average person needs to wear when being active in it." Turns out that performing simple tasks like scratching one's belly or sitting around doing jack squat requires no more than a pair of shorts. But demanding real world tasks like walking outside and buying groceries requires no less than 200% more clothing. "We are gonna make a killing with this new realization," said a Kohls business dude to a hobo on the street pretending to be a news correspondent.

Re:Kohls spooges over real world clothing demands (1)

vux984 (928602) | about 7 years ago | (#20687651)

Turns out that performing simple tasks like scratching one's belly or sitting around doing jack squat requires no more than a pair of shorts.

Uh-oh. /looks around for shorts...

At least it explains why my belly scratching was all laggy. I didn't meet the requirements.

Re:Kohls spooges over real world clothing demands (1)

Sciros (986030) | about 7 years ago | (#20687723)

Hehehe that's right I don't want to picture naked slashdotters sitting around at the compootar scratching their hairy bellies. It's bad enough with just shorts.

And we close in to the demands of AI (1)

CrazyJim1 (809850) | about 7 years ago | (#20687557)

One of the core components of AI is that it runs a virtual world that is an imagined model of our real world. It doesn't have to be complete; it only has to know enough to get around. It's funny how games spur the development of faster computers. Hasn't it been this way ever since arcade games? All those quarters didn't go to waste. At least that's what I like to think.

!True (1)

Arutema (1159609) | about 7 years ago | (#20687619)

A single EVE server ("node") handles about 700 users. The EVE universe ("cluster") handles 30,000+ users.

Web sites without Flash, maybe... (1)

xxxJonBoyxxx (565205) | about 7 years ago | (#20687693)

PC's processor bumps up to 20 percent utilization while browsing the Web.


Web sites without Flash, maybe.

The "processor" vs. "coprocessor" arguments has been going on forever. Meanwhile, people like me are still happily running Pentium3 systems at home at probably will for the next 5 years.

SL (1)

Tom (822) | about 7 years ago | (#20688755)

160? On the server, where you don't even have to render graphics? Either they're running 160 self-aware AIs on there, or the SL server code sucks so badly it might as well have been written in Visual Basic.

Re:SL (2, Informative)

cowscows (103644) | about 7 years ago | (#20689419)

The way SL works (per my second- or third-hand reading) is that the whole world (the Grid) is made up of a bunch of squares of virtual land (sims). Supposedly each sim is its own server/blade/whatever. That sim is responsible for everything that happens within its virtual land space: all of the players currently in that sim, all the interactions of physical objects within it, and, probably more importantly, all of the scripted objects within it. When you create something in SL, you can not only model and texture it, you can also use the SL scripting language to make objects function in all sorts of ways. You edit the script, it gets compiled, and it gets tucked into the object. Objects can have multiple scripts in them.

While in a sim, SL will share lots of information about what that particular server is dealing with. It's not unusual for the more crowded sims to have thousands of scripted objects, and that's a lot of little things going on. Many of those scripts were not written by experienced programmers, so efficiency was likely not a big consideration.

They're running metric buttloads of physics. (1)

argent (18001) | about 7 years ago | (#20689849)

400, actually. That's 4 regions (called sims) on a 4-core server with up to 100 avatars per region.

That's with each sim doing concurrent physics calculations for 100 avatars interacting concurrently with 15,000 unique objects in a 256x256x768 meter simulated volume, with each avatar running up to 1,000 concurrent scripts. Anywhere from 10 to 1,000 objects are independent actors that have to be taken into account for object-object collisions with a 1/45th of a second quantum. Maybe 1,000 objects are running their own scripts, ALL THE TIME. And all the content is created by complete amateurs with no understanding of how to make physics run fast, who make no attempt to establish collision zones or otherwise optimize the layout.

So each server is handling the physics for 60,000 objects, maybe 3,000 of them dynamic objects that cause as much load as a player in a regular combat game, and there's no optimization of the "level".

I don't think they're doing so badly.
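A quick back-of-the-envelope check on those figures, using only the parent's numbers (real engines such as Havok prune collision pairs with broad-phase culling rather than testing them all, which is exactly why unoptimized layouts hurt):

    objects_per_sim = 15_000
    sims_per_server = 4
    steps_per_second = 45                     # the 1/45 s quantum above
    print(objects_per_sim * sims_per_server)  # 60000 objects per server
    naive_pairs = objects_per_sim * (objects_per_sim - 1) // 2
    print(naive_pairs * steps_per_second)     # ~5.1e9 pair tests/s if done naively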

Re:They're running metric buttloads of physics. (1)

Tom (822) | about 7 years ago | (#20692965)

Ok, that makes more sense now.

So what they badly need is more sensible physics, and limits on what the amateurs do. Not limits in the sense of restricting what they create, but, say, automatic creation of collision zones, bounding boxes, etc.

Or simply a better system: 90% of the objects everywhere I went in SL don't have any visible physics. They're just walls, for example.

Re:They're running metric buttloads of physics. (1)

argent (18001) | about 7 years ago | (#20693787)

Or simply a better system: 90% of the objects everywhere I went in SL don't have any visible physics. They're just walls, for example.

More than 90%, in fact; even the remaining 10% would be 1,500 physical objects in a sim, which is a pretty heavy load. And the non-physical objects still have to be included in the physics calculations.

I don't know what kinds of internal optimizations Havok (the physics engine) performs, but the point is that it can't get any useful guidance from the builders in doing so.

Re:They're running metric buttloads of physics. (1)

jafuser (112236) | about 7 years ago | (#20697077)

You hit on a good point here. Most people in SL are amateurs. They don't know how to optimize, or are unwilling to expend the effort. These people don't have the skill of a team of professional game designers, who tweak every polygon for maximum speed.

Even those in SL with some skill are limited by the tools that they are given.

Professionally designed games take into account the maximum number of movable objects the physics engine will have to deal with, and they probably design for a maximum number of polygons that might ever appear on screen at once. SL doesn't specifically try to limit its users in these ways (other than some really minor, inconsequential things like 31 primitives per solid physical object), nor does LL (the company that runs SL) provide guidelines or suggestions for improving efficiency in these regards.

Re:They're running metric buttloads of physics. (1)

tonyreadsnews (1134939) | about 7 years ago | (#20697497)

They are working on a better system. Check out the Architecture Working Group: http://wiki.secondlife.com/wiki/Architecture_Working_Group [secondlife.com]. There is also a comparison (with pictures!) of the current setup and what they hope to transition to: http://wiki.secondlife.com/wiki/Proposed_Architecture [secondlife.com]. Tao Takashi reported this info but spent more time focusing on the fact that they are trying to develop the whole architecture openly. If done right, I think it could even help games like WoW and EVE perform better. (Imagine the performance improvement if everything in SL were pre-made and optimized by game professionals, for people who just want to go there and play.)

Distributed Gaming (1)

halcyon1234 (834388) | about 7 years ago | (#20688903)

I'm surprised they're not trying to capitalize more on this. The piece about the "hard science" of gaming [slashdot.org] was mostly fluff, but it did highlight an interesting point: if you want ultra-realistic graphics and physics, you need a crapton of CPU power. Intel, or some other enterprising folks with a lot of computers lying around, should take up the challenge.

When I play WoW (and I use "I" figuratively, since I don't actually), my computer doesn't have to process everything to do with Azeroth. I let Blizzard do that, and all my CPU needs to do is display what I'm looking at. Which is a good thing, because my computer probably couldn't handle running a WoW server AND drawing the graphics AND doing everything else it needs to.

Why not take this one step further and farm out the physics and graphics processing to a remote supercomputer cluster? Let's say I play a game where the goal is to knock down a building. I want every brick, tile, and support beam in that building to be represented by an object controlled by a physics engine, which in turn can simulate every stress, strain, and force at work. My CPU certainly can't do that -- but if the CalculatePhysics() routine farms out to a Beowulf-whatever, returning to my CPU only the resultant vectors, maybe it's doable.
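Sketching the idea around the poster's hypothetical CalculatePhysics() routine (the endpoint, wire format, and names below are all invented for illustration):

    import json
    import urllib.request

    CLUSTER_URL = "http://physics-cluster.example.com/solve"  # hypothetical

    def calculate_physics(object_states):
        """Ship object states to a remote cluster; get resultant vectors back."""
        body = json.dumps(object_states).encode()
        req = urllib.request.Request(CLUSTER_URL, data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())  # e.g. {"brick_17": [0.0, -9.8, 0.2]}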

Re:Distributed Gaming (1)

pafein (2979) | about 7 years ago | (#20689583)

Why not take this one step further and farm out the physics and graphics processing to a remote supercomputer cluster?
Bandwidth.

Re:Distributed Gaming (1)

argent (18001) | about 7 years ago | (#20693809)

Bandwidth.

Not to mention latency.

Re:Distributed Gaming (1)

C0rinthian (770164) | about 7 years ago | (#20689645)

Why not take this one step further and farm out the physics and graphics processing to a remote supercomputer cluster? Let's say I play a game where the goal is to knock down a building. I want every brick, tile, and support beam in that building to be represented by an object controlled by a physics engine, which in turn can simulate every stress, strain, and force at work. My CPU certainly can't do that -- but if the CalculatePhysics() routine farms out to a Beowulf-whatever, returning to my CPU only the resultant vectors, maybe it's doable.

Isn't that going to put significantly more load on the game's network code? (It depends on how many actors are actually animated on your end.) I would imagine that having a server send realtime status for the thousands of bits of your building as they fly around and interact would need one fat pipe. If more than one player is viewing the scene, the outbound traffic multiplies per player. It gets even worse when you factor in players interacting with the scene you're talking about, etc.

There is a very good reason most online games have a rather small number of dynamic actors flying around at any given time.
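Rough arithmetic behind that concern (illustrative numbers, not measurements):

    bricks = 10_000           # dynamic objects in the collapsing building
    bytes_per_update = 24     # position + velocity as six 4-byte floats
    updates_per_second = 20   # server tick rate
    players = 8               # everyone watching the same collapse
    mbps = bricks * bytes_per_update * updates_per_second * players * 8 / 1e6
    print(f"{mbps:.0f} Mbit/s outbound")  # ~307 Mbit/s: one very fat pipe indeed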

Re:Distributed Gaming (1)

mikael (484) | about 7 years ago | (#20689669)

Some companies did propose that: you would have a thin client that connected to the server, and the server would render the final frame and send it (or just the changes) back down to the client. This worked with itty-bitty 3D graphics windows on an X window server, but on a full-screen HDTV system you would need full movie-style compression on the data. Since the latest DVD compression methods take hours to run and reference the last sixteen frames or more, this isn't practical just now.
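The raw numbers show why compression, with its encoding latency, would be unavoidable (illustrative figures for an uncompressed 1080p stream):

    width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30
    mbps = width * height * bytes_per_pixel * fps * 8 / 1e6
    print(f"{mbps:.0f} Mbit/s uncompressed")  # ~1493 Mbit/s before compression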

Re:Distributed Gaming (1)

dbIII (701233) | about 7 years ago | (#20692869)

Let's say I play a game where the goal is to knock down a building. I want every brick, tile, and support beam in that building to be represented by an object controlled by a physics engine

No, you don't. You want to model it with the coarsest objects you can get away with. The mesh does not have to be uniform in size and shape: triangles work as well as rectangles, and tetrahedrons as well as rectangular prisms. The shape of the model can also change over time, for when you need to model a single brick. This is done in real designs and can model reality quite well.

Real elephants are hairy, but in many cases you can save time, and it makes no difference, by assuming a smooth, massless elephant.

Re:Distributed Gaming (1)

argent (18001) | about 7 years ago | (#20693795)

"First assume all lions are spherical..."

EVE Online on One Server (1)

adamkennedy (121032) | about 7 years ago | (#20690617)

If by one server you mean 200+ multicore blades with a 400 GB RamSan database behind them...

Figures Ignore Performance Bottlenecks (1)

hdon (1104251) | about 7 years ago | (#20691447)

Subject says it all.

That is, it would say it all if my post were as comprehensive as some of the figures in this story.

Most web browsers are massively memory-hungry. Circular JavaScript references leave Internet Explorer practically hemorrhaging unreachable allocations, and in Firefox, numbers are often reference-counted and allocated on the heap.

My CPU doesn't break above 20% when browsing the web either. But I'd get the same performance from a machine with four times as much RAM and a CPU one-fifth as fast.

Don't discount network bottlenecks either. As Verizon rolls out its FiOS service in more areas and bandwidth keeps getting cheaper, you should expect to see greater CPU usage, even without paging virtual memory in and out of your hard disk.

BS on Stats (0)

Anonymous Coward | about 7 years ago | (#20709771)

160 users per CPU in Second Life? Give me a break.
You can barely get above 40 before a dedicated, isolated island sim crashes (Class IV and V sims).
I think they're stating a theoretical limit.