SGI Introduces World's Densest Server
Twirlip of the Mists writes "Today SGI announced the Origin 3900 server, the world's densest computer. How dense? How about 16 MIPS R14000A processors and 32 GB of RAM in a 4-rack-unit 'superbrick,' for a grand total of 128 processors and 256 GB of RAM in a single rack. That makes the new machine the densest single-system-image computer in the world; it's even denser than most blade systems. Just for fun, the server also includes a whole bunch of 64-bit, 133 MHz PCI-X slots (from 11 up to hundreds and hundreds, depending on configuration). There's coverage of the announcement on ZDNet, CNET, and InfoWorld, as well as on SGI's own site."
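For reference, here's a quick back-of-the-envelope check of those density figures in C. The announcement's numbers (16 processors and 32 GB per 4U superbrick, 128 processors per rack) are taken as given; the assumption that the rest of the rack goes to I/O bricks and routers is mine.

#include <stdio.h>

int main(void)
{
    /* Figures quoted in the announcement */
    int cpus_per_brick = 16;   /* MIPS R14000A processors per 4U superbrick */
    int gb_per_brick   = 32;   /* GB of RAM per superbrick */
    int brick_units    = 4;    /* rack units per superbrick */
    int rack_cpus      = 128;  /* processors per rack */

    int bricks = rack_cpus / cpus_per_brick;
    printf("superbricks per rack : %d\n", bricks);                               /* 8      */
    printf("compute rack units   : %dU of a 42U rack\n", bricks * brick_units);  /* 32U    */
    printf("RAM per rack         : %d GB\n", bricks * gb_per_brick);             /* 256 GB */
    return 0;
}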
SGI's Gettin' Some (Score:2, Insightful)
Re:SGI's Gettin' Some (Score:2, Funny)
Re:SGI's Gettin' Some (Score:3, Insightful)
The 128-processor Origin 3900 lists for $2.9 million. There's nothing "cheaper" about this. Faster, yeah; this is one of-- not "the," but one of-- the fastest computers in the world. And it's the densest. But it's nowhere near cheap.
Re:SGI's Gettin' Some (Score:3, Funny)
Re:SGI's Gettin' Some (Score:5, Interesting)
There are a few blade systems that can squeeze 128 or more processors into a rack, but those are blade systems, not single-system-image compute servers. You can't use a blade server to do the job of an Origin 3900. (Of course, the converse is also true; you wouldn't buy an Origin 3900 to do something you could do with a blade server instead.)
SGI tends to produce exactly what the customer wants. It's just that their customer is more often than not the federal government, or a very large corporation. It's not well-known-- in fact, for a time it was classified-- but SGI designed, manufactured, and sold an entire line of what were basically DSP coprocessor units specifically for Lockheed's satellite division. Called the "tensor processing unit," each one was basically an expansion module for the Origin 2000. SGI built it just like a commercial product, complete with documentation and everything, and manufactured them in large quantities. It's just that you couldn't buy them unless you were Lockheed.
It's only when SGI tries to branch out that they do poorly. I don't know WTF they were thinking when they decided to try selling inexpensive (relative to other SGI products) workstations running NT or Linux. That was just insane. But as SGI strips more and more of that BS away, they get closer and closer to being a sound company again.
Re:SGI's Gettin' Some (Score:3, Interesting)
History speaks pretty clearly about what happens to companies that marginalize their business into making 1-offs for infinite-budget DoD contracts and agencies. Eventually, projects get cancelled, line items in budgets get axed, and whole departments are re-orged into something different.
Cray, anyone? Cray Research basically went under when the Cray-3 contract was axed. They were counting on that single machine to keep them afloat. They futzed around with a custom GaAs process and never got it quite working right, and then the Cold War ended, and with it the justification for subsidizing a maker of one-off supercomputers.
(Incidentally, the purchase of Cray is what really broke SGI's back: 50% more employees, 2% more market cap, and the O2k/O3k technology came from Stanford, not Cray.) SGI bought itself into the supercomputing space with the Cray acquisition, but their sales reps didn't know what to sell: T3, vector, or Origin. It bled the company pretty badly.
Nobody disputes that, right now, there are some things for which there simply isn't any other rational choice besides SGI. In the early 90s, that was "anything with video, at all." Look how that market has all but vanished for them.
The problem is that the number of markets for which SGI is the only choice is shrinking and will continue to shrink. Only the institutions that need to be 1-3 years ahead of the curve will pay the huge markup for it. The big advantage of the O3k system is, as you point out, the single-system image. But this is only really advantageous for lazy programmers, and when you're talking $3M for a machine to do scientific or simulation work, I suspect a lot of the code running on these is very custom, and NOT written by lazy programmers. So the brilliant thinking SGI has put into the hardware can sometimes be beaten by domain-specific software. E.g., let's say that MOSIX and 10Gb Ethernet advance to the point that you can build a 1024p, 512-node cluster, where the backbone (10Gb Ethernet) is constructed in the same hypercube fabric as the NUMAlink cables, and MOSIX can emulate in software the memory/process/thread migration that the O3k is doing now...
...then will $2.9M for a machine still seem justified? A 512-node Wintel cluster is cheaper than $2.9M if the node cost is under about $5,500. How many x86 boxes do you know of that cost $5,500, even with 2 processors, a few GB of RAM, and 4 or 5 10Gb Ethernet controllers (so that each node is n-way connected in the same hypercube fabric that the O2k and O3k provide)?
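A minimal arithmetic sketch (in C) of that break-even figure; the $2.9M price and the 512-node count come from the posts above, everything else is just division.

#include <stdio.h>

int main(void)
{
    double sgi_price = 2.9e6;  /* quoted 128-processor Origin 3900 list price, USD */
    int    nodes     = 512;    /* hypothetical dual-processor cluster nodes */

    /* The cluster undercuts the Origin only if each node costs less than this. */
    printf("break-even cost per node: $%.0f\n", sgi_price / nodes);   /* ~$5,664 */
    return 0;
}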
Here's what you're missing (Score:4, Interesting)
There are entire classes of problems which cannot be solved fast enough on clusters, but only on single-image systems. Anything that cannot be made into a parallel algorithm falls into that category.
With networked clusters you're always going to have latencies, orders of magnitude higher than with single-image supercomputers.
Sure, perhaps in 10 or 15 years we're going to have network latencies as small as those of a PCI bus, but I'm not really talking about a future that far off. Until then, clusters will be slow for certain problems. Deal with it.
Re:Here's what you're missing (Score:3, Interesting)
With networked clusters you're always going to have latencies, orders of magnitude higher than with single-image supercomputers.
While your point about Ethernet latency is valid, you should be aware that, for somewhat more money, you can get 2 Gb/s throughput and about 7 µs latency. More info at myri.com [myri.com].
The gap between supercomputer and desktop is getting narrower each year. Eventually you will buy your computer by the pound.
Re:SGI's Gettin' Some (Score:2)
Re:SGI's Gettin' Some (Score:5, Insightful)
Cray has already taken more than $25 million in orders for the X1, a computer that hasn't even been built yet. Cray has had a rough time, but they're doing just fine.
let's say that MOSIX and 10Gb Ethernet advance
What if it does? Bandwidth between nodes isn't as big a problem as latency in that case. No matter how fast-- in terms of bits per second-- your network transport is, you're always going to have latencies that are a million times higher than node-to-node latencies inside a NUMA system like the Origin. Seriously, a million times; we're talking milliseconds versus nanoseconds here. Your dismissal of single-system-image designs in favor of cluster designs shows a distinct lack of vision on your part, I'm afraid.
then will $2.9M for a machine still seem justified?
If you set up the hypothetical situation such that the less-expensive system does everything that the more-expensive system can do, then no, of course the more-expensive system isn't justifiable. But that's not reality. SGI can deliver 1,024-processor systems right now. You can call them up and place an order for a 512-processor system right out of their main price list. (Bigger systems are special deals, but the 512-processor configuration has its own part number, just like a workstation or a monitor.)
Two or three years from now, when everything you just described is possible, let's see what SGI has in its price book and revisit the question. I imagine the answer then will be the same as the answer now, just with the facts ratcheted up a few notches. "Yeah," you'll say, "SGI can deliver 8 kiloprocessors for $3 million, but is it justified? A 2-kilonode Wintel cluster is cheaper...."
Re:SGI's Gettin' Some (Score:2)
This is, of course, aside from the fact that I was obviously using the expression "the converse is also true" in the colloquial sense, not the Boolean. Dork.
necessary but not sufficient (Score:2, Funny)
Re: (Score:2)
System Requirements (Score:5, Funny)
~S
No (Score:2)
This is the current generation hardware. Doom II will require _next generation_ hardware.
Re:No (Score:2)
Re:System Requirements (Score:3, Funny)
Re:System Requirements (Score:2)
Densest server? (Score:5, Funny)
Now if only we could test it... (Score:2, Funny)
World's Densest Server (Score:4, Funny)
This record goes to Emmanuel at the little bistro on Rue de Bach just off Blvd. St. Michel in Paris.
Re:World's Densest Server (Score:2)
No, it's just a test. Get back in the kitchen.
Re:World's Densest Server (Score:2)
Dense? (Score:2, Funny)
({:P for the {:P-impaired)
yeah... (Score:2)
Does it include its own fire suppression system? (Score:2, Interesting)
Not so hot (Score:2, Insightful)
We also have a Linux rack, and it gets pretty hot too. We had to move the Linux rack next to the A/C blower. I can't really speak for other vendors, but SGI is doing a good job of cooling their stuff.
Are we talking Homer dense? (Score:2)
Or Dan Quayle dense?
D'oh!
Blade/Origin Comparison (Score:5, Insightful)
Re:Blade/Origin Comparison (Score:4, Informative)
I compared the density of SGI's system to blade systems because those are widely considered to be the densest computers in the world, with something like 90 or 100 individual one-processor computers per rack. This system is not only as dense in terms of pure processor count as most-- not all, but most-- blade servers, but it also has all the advantages of a single system image for HPC applications.
Re:Blade/Origin Comparison (Score:5, Interesting)
Hmmmm (Score:2, Funny)
Pointless in most datacenters (Score:3, Interesting)
These servers are pointless in most datacenters. In order to fill one rack with this much horsepower, you would need at least two empty racks next to it to compensate for the power draw and (much) increased cooling needs. I would argue that the target market for this equipment is government labs, research institutes and universities--not usually starved for floor space.
Re:Pointless in most datacenters (Score:4, Interesting)
Re:Pointless in most datacenters (Score:4, Insightful)
Re:Pointless in most datacenters (Score:2)
Re:Pointless in most datacenters (Score:2)
Re:Pointless in most datacenters (Score:4, Informative)
The value of shrinking it down is (as you allude to) not a real-estate issue, but more about the computing efficiencies of a denser package.
The HP blades (6U) are about 35kW nameplate per rack, with a real load of about 10-11kW. The energy savings of SGI might actually give it some value in comparison!
Favorite Quote (Score:5, Funny)
Procter & Gamble, for example, uses an SGI system to study the aerodynamics of Pringle's potato chips
Re:Favorite Quote (Score:2, Funny)
Inappropriate! (Score:2, Funny)
You'd have a core meltdown that's hotter and does more damage than most nuclear weapons.
<ducks>
I'm not sure... (Score:2)
Not if it does't run a Microsoft server product.
*ducks*
17 watts! (Score:2, Interesting)
Re:17 watts! (Score:2)
Well, PlayStations (I & II) use them, and they're used in many embedded systems because of their low-power/decent-performance characteristics. On the desktop, they were in a line of NT-based workstations put out by NEC and Sony, as well as MIPS itself.
Re:Not only in SGI (Score:2)
Density by flops? (Score:5, Insightful)
Let's cruise on over to the Top 500 [top500.org] and use their handy-dandy HTML list [top500.org] to view the 'most powerful chip.' This unfortunately requires a little calculator work, because they failed to include this number in their table.
#1 NEC Earth-Simulator 35,860.00 GFlops using 5,120 Processors -- WOW!
But that's only 7 GFlops per processor
Now let's look at a slightly different design
#14 Hitachi SR8000-F1/168 1,653.00 GFlops using 168 Processors -- Hot DAMN!!
This is more like it. They're pulling 9.84 GFlops per processor. With their architecture they could pull off the Earth-Simulator's GFlop rate with 3,645 processors - That's 28% less computer doing the same amount of work. Which means if the Earth-Simulator had been constructed with Hitachi's hardware, they could have been pulling 50,380 GFlops in the same cubic footage.
Now this is all rambling that assumes that the processors are similar in size. Which probably isn't true. But they are also getting more power out of less hardware, and it is rare that THAT isn't a bonus.
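For anyone who wants to redo the arithmetic, here it is as a small C program; the GFlops totals and processor counts are the Top 500 figures quoted above, and the processor sizes are assumed comparable, as the poster notes.

#include <stdio.h>

int main(void)
{
    /* Top 500 figures quoted above */
    double nec_gflops = 35860.0, nec_procs = 5120.0;  /* NEC Earth-Simulator   */
    double hit_gflops =  1653.0, hit_procs =  168.0;  /* Hitachi SR8000-F1/168 */

    double nec_per = nec_gflops / nec_procs;          /* ~7.0 GFlops/processor */
    double hit_per = hit_gflops / hit_procs;          /* ~9.8 GFlops/processor */

    double procs_needed = nec_gflops / hit_per;       /* Hitachi procs to match the ES   */
    double same_size    = nec_procs * hit_per;        /* GFlops from 5,120 Hitachi procs */

    printf("NEC     : %.2f GFlops/proc\n", nec_per);
    printf("Hitachi : %.2f GFlops/proc\n", hit_per);
    printf("procs to match the ES : %.0f (%.1f%% fewer)\n",
           procs_needed, (1.0 - procs_needed / nec_procs) * 100.0);
    printf("ES-sized Hitachi      : %.0f GFlops\n", same_size);
    return 0;
}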
Flops are not everything (Score:5, Interesting)
But the flops are not everything. The problem with clusters is the network latency when the nodes talk to each other. That latency is small for your average network application, but immense for a supercomputer trying to make all its CPUs talk together. This is why there are entire classes of problems that cannot be solved properly on clusters (non-parallelizable problems).
As opposed to that, an SGI supercomputer has inter-CPU latency that is orders of magnitude lower. Same total GFlops (same CPU power), but certain problems get solved orders of magnitude faster.
That's the power of latency.
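To put rough numbers on that, here's a toy estimate in C. The latencies are ballpark assumptions on my part, not measurements: ~100 µs for commodity Ethernet plus a TCP stack, ~7 µs for a Myrinet-class interconnect (the figure mentioned elsewhere in the thread), and well under a microsecond for a NUMA remote-memory access; the number of exchanges per timestep is made up.

#include <stdio.h>

int main(void)
{
    /* Assumed per-message / per-remote-access latencies, in seconds */
    double eth  = 100e-6;   /* commodity Ethernet + TCP stack */
    double myri =   7e-6;   /* Myrinet-class interconnect     */
    double numa = 0.5e-6;   /* NUMA remote-memory access      */

    double msgs_per_step = 1e6;  /* fine-grained exchanges per solver timestep */

    printf("time spent waiting per timestep:\n");
    printf("  Ethernet cluster : %.1f s\n", msgs_per_step * eth);   /* 100 s */
    printf("  Myrinet cluster  : %.1f s\n", msgs_per_step * myri);  /*   7 s */
    printf("  NUMA machine     : %.1f s\n", msgs_per_step * numa);  /* 0.5 s */
    return 0;
}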
Re:Flops are not everything (Score:2)
Re:Density by flops? (Score:2)
Re:Density by flops? (Score:2)
Note, though: the SGI press release states that it only scales to 512 processors. It looks like they are having problems scaling beyond that, probably having to do with the interconnect and SSI approaches they are taking (at a guess).
That means that you will see a peak of around 5 teraflops. The density is impressive for that speed. The peak performance and scalability are not-- speaking from the supercomputing world, that is. It is something to be proud of (for SGI), but if they want to take the SC world by storm they need to scale higher. The high end of machines that will be ordered over the next 5 years is going to be in the 100+ teraflop range for peak performance. (re: Blue Planet [nersc.gov])
While most of the market does not care about the very high end systems - they can't afford them - they ARE excellent PR. Bragging rights can go a long way.
Re:Density by flops? (Score:2)
This is so true, and it's really unfortunate. Consumers base too much of their purchases on how impressive something appears, or how they can use it to impress their clients-- all in spite of the actual performance of the item in question.
A good example is LCDs instead of CRTs on the print servers at my office. They're in public areas and they "look cool". Unfortunately, they get about 10 minutes of actual use during the day. What a waste of money.
true to SGI style. (Score:2)
I would love to see an SGI server case designed by H.R. Giger.
Re:true to SGI style. (Score:2)
Re:true to SGI style. (Score:2)
Amazing.... (Score:2)
I wonder what SGI could do if it had the same number of employees Sun or IBM has.
I think that, once again, they prove that they can provide the community with cool and kick ass products.
Congrats SGI, this is just amazing... Other companies should follow.
Re:Amazing.... (Score:2)
Let's see who buys this and what it performs like when installed. I like the idea of a single-image OS.
I doubt, however, that it's going to make a dent in the supercomputer top 100. It sure as hell ain't going to beat that Japanese monster.
The Japanese made an order of magnitude increase in processing power and you think this toy from SGI is leading edge? LOL
SGI is rapidly becoming the Transmeta of supercomputer manufacturers. Their product fills a very small niche, yet all the stupid kids like you think they're so neat.
As an aside, the one critical component in a supercomputer is memory-- fast memory. Without that, it matters not a jot how quickly your processors work. So what is the memory bandwidth of this baby?
Re:Amazing.... (Score:2)
12.8GB/s. What, you couldn't read the fucking product info before posting?
Re:Amazing.... (Score:2)
Aggregate, 12.8 GB/s. Actual STREAM TRIAD performance will be considerably higher than that. A 64-processor prototype system using this same architecture and Itanium2 CPUs scored the world record STREAM TRIAD benchmark back in early September. There's little argument that the Origin 3000 architecture is among the fastest architectures in the world.
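For those who haven't seen it, the TRIAD kernel that STREAM measures is about as simple as benchmarks get; a minimal, untuned C version looks roughly like this (array size and repeat count are arbitrary, and a serial clock() timing is obviously nothing like the tuned multi-CPU runs SGI reports).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    5000000   /* elements per array (arbitrary) */
#define REPS 20

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    double scalar = 3.0;

    for (long j = 0; j < N; j++) { b[j] = 1.0; c[j] = 2.0; }

    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        for (long j = 0; j < N; j++)
            a[j] = b[j] + scalar * c[j];   /* the TRIAD kernel itself */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* three arrays of doubles touched per element per repetition */
    double gb = 3.0 * sizeof(double) * N * (double)REPS / 1e9;
    printf("TRIAD: %.2f GB in %.2f s = %.2f GB/s (a[0]=%.1f)\n",
           gb, secs, gb / secs, a[0]);

    free(a); free(b); free(c);
    return 0;
}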
Re:Amazing.... (Score:2)
SGI will happily sell you a 512-processor machine [sgi.com] if you want one. The innovation in the 3900 is compute power/m^3, not raw power. It just so happens that the Origin 3000 has got the raw power too.
SGI is rapidly becoming the Transmeta of supercomputer manufacturers. Their product fills a very small niche, yet all the stupid kids like you think they're so neat.
Don't write them off so quickly. There are plenty of things that only an SGI can do. There are circa-1993 Indigo2s still on people's desks (being used for things like Gladiator), because even a 2002 PC can't do some of the things they can do. SGI are a niche vendor, true... but so is Mercedes.
As an aside, the one critical component in a supercomputer is memory-- fast memory. Without that, it matters not a jot how quickly your processors work. So what is the memory bandwidth of this baby?
Put it this way: internal bandwidth in an SGI workstation is 3.2 GB/s. Can your peecee do that?
HEAT? (Score:2, Funny)
Delaying the inevitable? (Score:3, Insightful)
Re:Delaying the inevitable? (Score:5, Insightful)
This isn't SGI finding a new reason to exist. This is SGI going back to what has always been one of its best reasons to exist. Over time, SGI's technical lead in graphics has diminished, fueled primarily by (believe it or not) home computer games. But even now, nobody can touch SGI for high-performance scalable servers like the 3900.
Re: (Score:2)
Re:Delaying the inevitable? (Score:3, Informative)
Not quite true. After all, in 1994 an Indy had better price/performance than a comparable Pentium system... and a Pentium couldn't touch a fully loaded Indy. With better marketing, SGI could have dominated the high-end 2D and low-end 3D space, driving out Apple and Intergraph, and continued to hold high-end 3D. I agree that NT was a colossal mistake for them, and they haven't recovered from that mistake yet.
It's when SGI de-focuses to talk about stuff like PCs with fancy cases or video servers or data mining software that they start to lose their way.
SGI servers are fantastic for large databases; the features that make them great for rendering and number crunching (high memory bandwidth, very fast disk I/O, single system image) can easily be applied to databases. The Origins should be wiping the floor with Sun's Fire range. It's a marketing failure, not a technology failure.
This isn't SGI finding a new reason to exist. This is SGI going back to what has always been one of its best reasons to exist. Over time, SGI's technical lead in graphics has diminished, fueled primarily by (believe it or not) home computer games. But even now, nobody can touch SGI for high-performance scalable servers like the 3900.
It has diminished true, but it still exists. There isn't a PC that can touch the Fuel workstation, for example.
Re:Just changing focus (Score:2)
Cooling Fans = Wind Tunnel (Score:5, Funny)
Anyone see the large image of this thing? It has like ten 6"-wide cooling fans. Walking by this thing will be like walking by a turbine jet engine. I can't wait for the Reader's Digest piece, "Sucked into the Origin 3000: How I Survived."
http://www.sgi.com/cgi-bin/download.cgi?/newsro
Doesn't RLX make something more 'dense' than this? (Score:2, Informative)
From their website: "The RLX System 300ex chassis holds 24 ServerBlades in 3U and supports the new ServerBlade 1200i." -- and it's even based on Linus's Transmeta chipset!
Not sure how Sun's server can top this... somebody help me out here.
Go read the theory, dude (Score:2, Informative)
Mmmm.. (Score:2)
The US list price for a 128-processor supercomputer with 64GB of memory is $2,937,696.
Will they accept PayPal?
From the article (Score:5, Funny)
Imagine how much the version with 128 MB must cost!
Re:From the article (Score:2)
All of the others correctly say 64GB.
Woah (Score:5, Funny)
Dedicated Application Computing (Score:4, Informative)
Notice how many times the word Linux is used...
not that impressed... (Score:2, Insightful)
For a 42U rack, you get 84 processors, with each processor being about two and a half times faster (SPECint2000 1202 vs. 483 and SPECfp2000 1170 vs. 495, with the Hammers in 32-bit mode). Each 1U Hammer node can contain up to 16 GB of memory, which gives a total of 672 GB of RAM, compared to the 256 GB of the Origin 3900. I also wouldn't be surprised if a 42U rack of Hammers ended up costing more like $300,000 than $3,000,000.
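A minimal C sketch of that comparison, using the numbers in the post above; the two-processors-per-1U-node assumption is implied by the 84-processor total.

#include <stdio.h>

int main(void)
{
    int units = 42, cpus_per_u = 2, gb_per_u = 16;   /* hypothetical 1U dual-Hammer nodes */

    printf("processors per rack : %d\n",    units * cpus_per_u);  /*  84    */
    printf("RAM per rack        : %d GB\n", units * gb_per_u);    /* 672 GB */

    /* per-processor speed ratios from the quoted SPEC numbers */
    printf("SPECint2000 ratio   : %.2fx\n", 1202.0 / 483.0);      /* ~2.5x */
    printf("SPECfp2000 ratio    : %.2fx\n", 1170.0 / 495.0);      /* ~2.4x */
    return 0;
}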
Not the world's densest (Score:2)
The server may be SGI's densest, but at least as far as processing power, it is not the densest. As a counterexample, the above configuration has four processors per unit. Many vendors sell 1U Athlon servers [hoise.com] in which each unit holds two dual Athlon systems (four processors per unit), and I can assure you that an AthlonMP 2200+ is quite a bit faster than a MIPS R14000 @ 600MHz.
True, those two Athlon systems aren't a single server, but we're talking density here.
Regardless, SGI does have the Athlon beaten hands down on memory per unit.
What about the Connection Machine? (Score:2, Interesting)
It seems that massively parallel computing has gone the way of the dinosaur, what with the advent of more powerful CPUs. But I read that Danny Hillis, of MIT and Thinking Machines fame, built a supercomputer called the Connection Machine [barnesandnoble.com] which housed 65,536 processors, each of which lived on the same wafer as its dynamic RAM, arranged in a 16-dimensional hypercube array. [www.gfai.de] I don't think the old beastie had nearly as much RAM as the new SGI (of course, this machine was '80s vintage). But depending on the physical size of the old box, could this not have been the world's densest computer ever?
Grammar (Score:2)
It just bothers me when people use poor grammar.
Just how dense is it? (Score:5, Funny)
Client:
GET / HTTP/1.1
Host: densestserver.sgi.com
Server:
Um... What's that?
Client:
Do you not understand HTTP 1.1?
Server:
Of course I do...?
Client:
Well then,
GET / HTTP/1.1
Host: densestserver.sgi.com
Server:
Okay... Would you like that biggie-sized?
Client:
wtf?
Server:
Oh, you want a web page. Okay, I get it now.
Client:
Great. Now send it, please.
Server:
Send what?
Client:
*sigh* Nevermind.
User:
Huh? What does "500 Server Error: Server too dense" mean?
Densest server has 336 processors per rack... (Score:3, Interesting)
RLX Technologies has a server based on the Transmeta Crusoe chip, and it can hold 24 CPUs in 3U of space, giving 336 processors per rack (and 336 GB of RAM and 27 TB of HDD :)
See promo here [rlx.com]..
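The per-rack arithmetic behind that claim, as a tiny C sketch; the standard 42U rack is my assumption, while the 24-blades-in-3U figure comes from the RLX page quoted above.

#include <stdio.h>

int main(void)
{
    int rack_units = 42, chassis_units = 3, blades_per_chassis = 24;

    int chassis = rack_units / chassis_units;                              /* 14 chassis */
    printf("blades (CPUs) per rack: %d\n", chassis * blades_per_chassis);  /* 336 */
    return 0;
}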
evil Beowulf (Score:2, Funny)
Re:Is it such a good new? (Score:5, Funny)
Correct me if I'm wrong, but wouldn't you want to *increase* heat dissipation?
Re:Is it such a good new? (Score:5, Funny)
When I got up this morning, it was 59 F outside. Now, just after lunch, it's over 65 F. If this trend continues, it will be hot enough to melt lead outside by next spring!
Beware statistical projections.
Re:Is it such a good new? (Score:2)
Re:Just imagine a beowulf .... (Score:5, Funny)
Response: The boys that cried "Beowulf!".
not as dense as mine ! (Score:4, Interesting)
Well, on a per-MIPS basis, maybe, but then again I could use faster CPUs today.
Re:Heating? (Score:5, Informative)
As far as heat loading goes, the "superbrick" is basically one big wind tunnel, with giant fans on the front and ventilation out the back. It pumps a lot of heat into the room, but the temperature in and around the CPUs is really pretty low. I think it peaks around 35 C.
Re:Heating? (Score:2)
At 17 W/processor, not really. According to one of the many press releases, this is using a 0.13 micron version of one of their older processors clocked at something like 600 MHz. I'd worry about the bus chipset heating up more than the processors.
It's interesting to look at the implications of a design like this. Highly parallel systems tend to be communications-limited, and systems that deal with large workloads tend to be memory-bandwidth-limited in general. All of this points to the processor not being the bottleneck. SGI appears to have designed with this in mind, using processors optimized for power instead of performance to improve density.
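Rough power arithmetic under that 17 W/processor figure, for the CPU dies alone; as the follow-up below points out, the Bedrock chips (plus RAM, routers, and fans) add a good deal on top of this.

#include <stdio.h>

int main(void)
{
    double watts_per_cpu = 17.0;  /* quoted R14000A figure */
    int    cpus_per_rack = 128;

    printf("CPU power per rack: %.0f W\n", watts_per_cpu * cpus_per_rack);  /* ~2.2 kW */
    return 0;
}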
Re:Heating? (Score:5, Interesting)
It does. The Bedrock chip is both considerably larger and considerably hotter than the R14000A is. (Bedrock is the memory controller, node crossbar, and "bus" arbitrator.)
As to your other comment, SGI got a lot for their money when they bought Cray back in the mid '90s. They took a lot of good Cray technology-- like crossbar-based NUMA system design principles-- and incorporated it into their large server systems. I believe SGI was the first company-- other than Cray itself-- to break the one-hundred-CPU barrier on a single system image. (The T3 series was a monster, but I don't recall exactly how many CPUs you could cram into one.)
I think it was Seymour himself who once said, "A supercomputer is a device for turning compute-bound problems into I/O bound problems."
According to the SGI page (Score:2)
Re:no different... (Score:5, Insightful)
The four-processor, 1-unit server you talked about stops there: at four processors. You can't compare that to a system that scales to be 256 times that size.
Re:no different... (Score:2)
I stand corrected.
Re:Z.... (Score:5, Interesting)
Interesting thing about this system will be, rather than the maximum RAM capacity, the minimum RAM required. The original Origin 3000 required some minimal amount of RAM-- 256 or 512 MB or something-- for every four processors. I'm not sure if this new model has the same requirement, but I'd imagine that it does. (It's an architectural thing. Every node board has to have some RAM on it, because that node board may be nominated at boot time to act as the boot master, among other reasons.)
If that's true, then a 128-processor system would require a minimum of either 32 or 64 GB of RAM, depending on whether you can put 256 MB on a node board.
Re:Superbrick's layout? (Score:5, Informative)
If you know what a first-generation C-brick looks like, imagine squeezing that board into a one-rack-unit form factor and stacking four of them together.
Each superbrick includes four boards, spaced one unit apart, with four R14Ks, the Bedrock, and some RAM. The boards are connected with an internal eight-port crossbar router, making the superbrick a self-contained 16-processor unit. Externally, the superbrick connects to the base I/O brick via XIO+; the base I/O brick contains stuff like the system disk and the first 11 PCI-X slots.
I'm not positive how the superbricks are configured. Theoretically, you can partially populate them in one-node increments (meaning 4 CPUs and some RAM), but SGI may or may not sell them that way for manufacturing and QA reasons.
I believe the CPUs come with 8 MB of s-cache each.
The CPU-to-CPU and CPU-to-RAM bandwidths vary depending on the topology you're crossing, but I believe the minimum is 1.6 GB/s unidirectional, or 3.2 GB/s bidirectional. Intra-node bandwidths are somewhat higher, I believe.
No, the CPUs are regular single-core MIPS R14000As. They're tiny chips that don't consume much power, so you can really squeeze 'em in there.
Keep an eye on techpubs.sgi.com, because SGI will be releasing the developer and owner docs for the new system there shortly. (By "shortly" I mean as soon as a few hours or as long as a few weeks, depending on when the docs get released.) You'll find all the technical data you want when those docs go up.
Re:Don't Imagine a Beowulf Cluster (Score:2, Funny)
I'd also like to mention that I enjoy Subway, despite their lack of 'piping hot grits' as a menu item, and give a shout out to the mods, my paint-huffing homeys keeping it real out there in Internet land.
Re:SGI (Score:2)
I guess you can still use the Octane as a space-heater, though. That's a plus.
Re:SGI (Score:2)
Re:Even more density (Score:2)
Re:SGI Wishes it has the densest server... (Score:2, Informative)
Re:no. read the specs (Score:2)
I'll say no (Score:2, Funny)