
Navy Debuts New Railgun That Launches Shells at Mach 7

joib Re: Difficult to defend against (630 comments)

Old-fashioned anti-ship missiles can be disabled or destroyed by the defending ship's close-in defenses. This is because the incoming missile is filled with sensitive electronics, guidance systems, explosives, fuel, turbojet engines, stabilizing fins, etc., and is very likely to be damaged or destroyed if hit by a 20mm round from the defending ship's CIWS (close-in weapon system).

FWIW, AFAIU the old gun-based CIWS systems are being replaced by missile systems (RAM), since they don't work against modern supersonic anti-ship missiles, to say nothing of railgun projectiles. Think about it: the gun fires a projectile traveling at about Mach 3, roughly the same speed as the incoming missile(?). So at the outer edge of its range (say, 4 km?) it starts shooting. The shells and the missile pass each other at around 2 km, at which point further shooting becomes more or less pointless: even if you hit the damn thing (at 1 km, this time), it will largely continue on its trajectory through sheer momentum, thanks to traveling at Mach 3. There's simply not enough time for the control algorithm (Kalman filter, or whatnot) to do its magic.
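The closing geometry described here can be sketched with back-of-the-envelope numbers. All speeds and ranges below are illustrative assumptions, not real system specs:

```python
# Back-of-the-envelope sketch of the CIWS engagement window.
# All numbers are illustrative assumptions, not real system specs.

MACH = 343.0              # approx. speed of sound at sea level, m/s
shell_speed = 3 * MACH    # outgoing CIWS round, ~Mach 3
missile_speed = 3 * MACH  # incoming anti-ship missile, ~Mach 3
engagement_range = 4000.0 # metres at which the gun opens fire (assumed)

# Shells and missile close on each other at the sum of their speeds.
closing_speed = shell_speed + missile_speed
time_to_meet = engagement_range / closing_speed
meet_distance = engagement_range - missile_speed * time_to_meet

print(f"first rounds meet the missile after {time_to_meet:.2f} s "
      f"at {meet_distance:.0f} m from the ship")
```

With equal speeds the first rounds meet the missile at the halfway point, about 2000 m out and under two seconds after firing, which is why the fire-control loop has so little time to correct its aim.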

So, how to defend against railguns? Well, you get a bigger railgun! :) Or ballistic anti-ship missiles. But yeah, it's probably quite hard to do anything about the railgun projectiles after they are launched.

about 5 months ago

Intel's Knights Landing — 72 Cores, 3 Teraflops

joib Re:How does the intercommunication work? (208 comments)

The mesh replaces the ring bus used in the current-generation MIC as well as mainstream Intel x86 CPUs. Each node in the mesh is 2 CPU cores and L2 cache. The mesh is used for connecting to the DRAM controllers, external interfaces, L3 cache, and of course, for cache coherency. The memory consistency model is the standard x86 one. So from a programmability point of view, it's a multi-core x86 processor, albeit one with slow serial performance and beefy vector units.

about 8 months ago

Oracle Fixes 42 Security Vulnerabilities In Java

joib Re:still with the java? (211 comments)

I'm in Scandinavia and don't need to use any Java applets...

FWIW, the only "major" bank in Scandinavia which requires Java applets is AFAIK Danske Bank, and they are set to introduce a Java-free banking site sometime this summer.

about a year ago

How Google Cools Its 1 Million Servers

joib Re:Immersion Would Be Better For the Environment (87 comments)

The problem is that the waste heat from servers is pretty low grade; Google runs their data centers hotter than most, and they report a waste heat temperature of about 50 C. I would guess the water used to cool the air thus gets heated to at most 45 C or so, which makes it difficult to use efficiently or economically. At least over here, district heating systems have an input temperature of around 100 C (in some cases slightly more; the pressure in the system prevents boiling).
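A quick way to see why 45 C water counts as "low grade" heat is the Carnot factor, which bounds the fraction of heat convertible to useful work between two temperatures. The ambient temperature below is an assumption for illustration:

```python
# Carnot bound on how much useful work can be extracted from waste
# heat at a given temperature. The 20 C ambient is an assumption.

def carnot_efficiency(t_hot_c, t_cold_c):
    """Maximum fraction of heat convertible to work between two
    temperatures (given in Celsius, converted to Kelvin)."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

ambient = 20.0  # assumed return/outdoor temperature, C

print(f"45 C server waste heat: {carnot_efficiency(45, ambient):.1%}")
print(f"100 C district heating: {carnot_efficiency(100, ambient):.1%}")
```

The 45 C stream bounds out below 8%, versus over 21% for 100 C district-heating water, which is roughly why the former is so hard to use economically.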

I don't see how this would be any different if the servers were immersion cooled in mineral oil rather than air cooled; in both cases the waste heat needs to be exchanged into water, and even with immersion cooling you couldn't run the system much hotter without affecting the reliability of the servers.

about 2 years ago

How Google Cools Its 1 Million Servers

joib Re:Immersion Would Be Better For the Environment (87 comments)

That might be a relevant argument if immersion cooling (or liquid cooling of the servers generally) were somehow new, innovative, or non-obvious. It's none of those. Secondly, I didn't mean to imply that Google would turn on a dime, but rather that at least some of the newer data centers would use something better if it were available. The Hamina data center seen in those pictures, for instance, was opened in 2012 and seems to use the same air-cooled hot-aisle containment design. I haven't seen "Princess Bride" (assuming it's a movie or play), so I won't comment on that.

about 2 years ago

How Google Cools Its 1 Million Servers

joib Re:Hot aisle containment (87 comments)

Google runs their datacenters at quite high temperatures: the cold side is around 25 C, the hot side 50 C. I suppose it would be a pretty unpleasant working environment if the main space of the server rooms were at 50 C rather than 25 C.

about 2 years ago

How Google Cools Its 1 Million Servers

joib Re:Immersion Would Be Better For the Environment (87 comments)

Current-generation Google datacenters already have a PUE of around 1.1, so whatever they do by tweaking the cooling, they cannot reduce total energy consumption by more than about 10%. Of course, at their scale 10% is still a lot of energy, but the question is how much of that they could actually recover by going to immersion cooling. So far the anecdotal answer seems to be "not enough", since otherwise they would surely have done it already.
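The PUE bound above follows directly from the definition: PUE is total facility energy divided by IT equipment energy, so only the non-IT overhead is up for grabs. A minimal sketch:

```python
# PUE = total facility energy / IT equipment energy. Only the overhead
# (everything that is not the IT load itself) can be saved by better
# cooling, which bounds the possible gain.

def max_cooling_savings(pue):
    """Fraction of total facility energy that is overhead, i.e. the
    upper bound on what perfect (free) cooling could save."""
    return (pue - 1.0) / pue

print(f"PUE 1.10 -> at most {max_cooling_savings(1.10):.1%} of total energy")
print(f"PUE 2.00 -> at most {max_cooling_savings(2.00):.1%} of total energy")
```

At PUE 1.1 the overhead is about 9% of the total; at PUE 2.0 (common for older facilities) it is a full half, which is why cooling tweaks matter so much more there.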

about 2 years ago

Teachers Write an Open Textbook In a Weekend Hackathon

joib Re:Ensuring the Quality of Textbooks (109 comments)

Every teacher individually?

AFAIU, yes. (That being said, while I have taught at the university level in Finland, I have no experience of the Finnish primary and high school system from the faculty viewpoint, so you might want to double-check with someone else.) Also, consider that there are something like 5 million Finnish speakers, so it's not a particularly large market, and teachers are not exactly going to be overwhelmed by the number of available textbooks. E.g. in physics I think there are about 3-4 book series covering the high school curriculum. I suppose it's a bit different in the US, where one presumably cannot assume a teacher has time to evaluate all the available textbooks. Then again, at least from over here it seems that textbook selection in the US is extremely politicized (can a biology textbook cover evolution? WTF!?), which probably isn't conducive to a good outcome either.

Textbooks must teach to the content of the abitur and the standards being established by the Bologna Process. So, I guess the curricula are well defined. But I'm still surprised that this decision would be left to every teacher individually.

Yes, the Ministry of Education defines the curriculum (broadly), so it's not like teachers are allowed to teach whatever they fancy. But generally, the large degree of autonomy given to teachers is often seen as one of the reasons why Finland does so well in these PISA tests. Teachers over here are pretty well educated, and it's a well-regarded profession. Of course, there are other reasons as well; e.g. Finland is culturally pretty homogeneous, and socioeconomic differences are quite small compared to many other countries. Anyway, it's not like teachers are alone in choosing textbooks: of course they talk with colleagues etc., and professional societies do from time to time publish reviews of the available textbooks, which I assume teachers read carefully.

As an aside, the Bologna process AFAIK covers only higher education (at the polytechnic/university level: bachelor/master/PhD), not high school. Of course, it indirectly affects lower education as well, in the sense that it effectively requires that students entering higher education have certain skills.

about 2 years ago

Teachers Write an Open Textbook In a Weekend Hackathon

joib Re:Ensuring the Quality of Textbooks (109 comments)

I think this Finnish group needs someone who is an insider on textbook selection committees to advise them. The last thing these committees want is to embarrass themselves by being seen to recommend a work that was produced in three days. They would lose their credibility, regardless of the quality of the work.

IIRC there are no textbook selection committees in Finland. Teachers are free to choose whichever book they want; or indeed to not choose any book at all and teach the class based on their own material.

about 2 years ago

Solar Impulse Completes First Intercontinental Solar Flight

joib Nice (56 comments)

The thing has a wingspan of 68 m, more than an A340. Yet it weighs 1600 kg, about the same as a car. Carbon fiber and epoxy make a pretty impressive combination.
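To put those two figures side by side, here's a rough span-loading comparison (mass per metre of wingspan). The Solar Impulse numbers are from the comment; the A340 figures are approximate public numbers and only meant for scale:

```python
# Rough span-loading comparison. Solar Impulse figures from the text;
# A340-300 numbers are approximate public figures, for scale only.

solar_impulse = {"span_m": 68.0, "mass_kg": 1600.0}
a340 = {"span_m": 60.3, "mass_kg": 276_500.0}  # approx. MTOW

for name, ac in [("Solar Impulse", solar_impulse), ("A340-300", a340)]:
    loading = ac["mass_kg"] / ac["span_m"]
    print(f"{name:14s} {loading:8.1f} kg per metre of span")
```

The airliner carries roughly two hundred times more mass per metre of span, which is what the carbon-fiber structure makes possible at the other extreme.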

more than 2 years ago

Australian Company Promises Switching Hardware With Sub-130ns Latency

joib Re:Meanwhile... (77 comments)

Some additional points:

- FWIW, Linux finally got rid of the BKL in the 2.6.39 release (2011).

- Many (most?) 10Gb NICs are multiqueue, meaning that the interrupt and packet-processing load can be spread over multiple cores.

- Linux (via NAPI) and presumably other OSes have mechanisms to switch to polling mode when processing large numbers of incoming network packets.

That being said, your basic points about interrupt latency being an issue still stand, of course.

more than 2 years ago

How Big US Firms Use Open Source Software

joib Re:They are afraid of GPL (116 comments)

Google's WebM project is also under a (3-clause) BSD license with an additional patent license on top (which terminates in case of a patent suit, similar to GPLv3 or the Apache License v2).

more than 2 years ago

Japan's Nuclear Energy Industry Nears Shutdown

joib Re:LOL, Bitter Nucleartard (267 comments)

They are going to have to get their electricity from somewhere & generating capacity don't grow on trees.

Unless they burn, um, err, apples? Yes, APPLES!

  1. Solve world energy crisis
  2. Get Nobel Peace Prize
  3. Profit!!!

Man, I'm awesome!

more than 2 years ago

KDE KWin May Drop Support For AMD Catalyst Drivers

joib Teh sky, it's falling!!111 (148 comments)

To recap, KWin currently supports:

  • No compositing
  • Compositing using the 2D XRender interface
  • Compositing using OpenGL 1.x
  • Compositing using OpenGL 2.x
  • Compositing using OpenGL ES 2 (code mostly shared with the OpenGL 2.x codepath)

So what is suggested here is to delete support for compositing using OpenGL 1.x.

Personally, I can hardly blame the developer for wanting to prune that list a bit.

And, if you don't want to see this feature deleted, now is your opportunity to step up to the plate and contribute!

more than 2 years ago

In Favor of FreeBSD On the Desktop

joib Re:People don't want to watch kernel compiling (487 comments)

Haven't you understood? Watching gcc output scroll by for hours on end will make you l33t! That's why Gentoo and FreeBSD users are so hardcore.

more than 2 years ago

Fujitsu Announces 16-core SPARC64 IXfx (and the Supercomputer It Powers)

joib Re:Pricing would be interesting! (68 comments)

What you're looking for is the Green500 list.

Indeed, but the site was down when I wrote my previous reply, so I had to resort to the top500 list and calculate flops/watt for the few top entries manually. :)

In any case, as one can see from the list, the best GPU machine manages to beat the K machine by a factor of 1.66, a far cry from the factor of 3-6 you originally claimed. And most GPU machines fall behind the K.

I think the SPARC64 VIIIfx is quite impressive: it gets very good flops/watt without being a particularly exotic design. Basically it's just a standard out-of-order CPU with a couple of extra FP units and lots of registers, clocked at a somewhat lower frequency than usual. No long vectors with scatter/gather memory ops, no GPUs, no very-low-power slow embedded CPUs like the Blue Genes, etc.

I have no knowledge of the design tradeoffs of the individual systems, but I'd say that it's fairly impressive that both the top500 and the Green500 have so many GPUs in the top 10, given that they're both CPU-dominated lists.

Large GPGPU clusters are still a relatively new phenomenon, give it a few years and I suspect you'll see a lot more of them.

more than 2 years ago

Fujitsu Announces 16-core SPARC64 IXfx (and the Supercomputer It Powers)

joib Re:Pricing would be interesting! (68 comments)

Oh? So how come the VIIIfx-based "K computer", apart from being the current #1 in performance, also beats the GPGPU clusters (with the latest Nvidia Fermi cards) in flops/watt on the latest top500 list? And heck, that's on Linpack, which should be pretty much the optimal workload for a GPU.

more than 2 years ago

Japanese Supercomputer K Hits 10.51 Petaflops

joib Re:How is it used? (125 comments)

How much does computer time on these things cost? How is the cost calculated? Is time divided up something like how it's done on a large telescope, where the controlling organization get proposals from scientists, then divvies up the computer's available time according to what's been accepted?

At the supercomputing centers I'm familiar with, scientists write proposals which are evaluated by some kind of scientific steering committee; it meets regularly (say, once per month) and awards a certain number of CPU-hours depending on the application.

Do they multi-task (run more than one scientists' program at one time)?

Yes. Typically users write batch scripts requesting the resources their job needs, e.g. "512 cores with at least 2 GB RAM/core, max runtime 3 days", and submit the job to a queue. When there are enough free resources in the system, the batch scheduler launches the job. When the job finishes (or during its runtime), the usage is subtracted from the quota awarded in the application process.
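A hypothetical batch script matching that example request might look like the following, written here in SLURM syntax (other schedulers such as PBS/Torque use different directives; the job and binary names are made up):

```shell
#!/bin/bash
# Hypothetical SLURM batch script for the example request above.
#SBATCH --job-name=my_simulation
#SBATCH --ntasks=512            # 512 cores (MPI ranks)
#SBATCH --mem-per-cpu=2G        # at least 2 GB RAM per core
#SBATCH --time=3-00:00:00       # max runtime: 3 days

# Launch the parallel job; the binary and input file are placeholders.
srun ./my_simulation input.dat
```

The user would submit this with `sbatch`, after which it waits in the queue until the scheduler finds 512 free cores meeting the memory constraint.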

Does the computer run at top power (10pf) at all times, or does the resource usage go up and down?

Usually all functioning nodes are running and available for use, yes. Typical load is around 80-90% of maximum, due to scheduling inefficiencies etc. (e.g. a large parallel job needs to wait until there are enough idle cores before it can start, and so forth).

And lastly, how hard is it to write programs to run on these things? Do the scientists do it themselves, and if so, do the people who run the supercomputer audit the code before it runs?

Pretty tricky. Usually they use the MPI library. The programs are written either by the scientists themselves or by other scientists working in the same field. The supercomputing center typically doesn't audit code, but it may require the user to submit scalability benchmarks before allowing large jobs. For some popular applications the supercomputing center may maintain an installation itself (so each user doesn't need to recompile it) and provide some more or less rudimentary support.

more than 2 years ago

Alcatel-Lucent Boosts Copper Broadband To 100Mbps

joib Re:What about latency? (129 comments)

2/3 of the speed of light (in vacuum) is actually about 200,000 km/s. Otherwise the parent poster is correct, though: there is no big difference between the speed of signal propagation in fiber vs copper.
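For a sense of what ~2/3 c means for latency, here is a quick propagation-delay calculation (the distances are just illustrative):

```python
# One-way propagation delay at ~2/3 the vacuum speed of light,
# which holds for both fiber and copper. Distances are illustrative.
C_VACUUM_KM_S = 300_000.0
v = (2.0 / 3.0) * C_VACUUM_KM_S    # ~200,000 km/s in the medium

for dist_km in (1, 100, 6000):     # LAN link, metro loop, transatlantic
    one_way_ms = dist_km / v * 1000.0
    print(f"{dist_km:5d} km -> {one_way_ms:7.3f} ms one-way")
```

Even a transatlantic run costs ~30 ms one way from propagation alone, which dwarfs the sub-microsecond contribution of any last-mile medium choice.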

more than 2 years ago

Ask Slashdot: Best Use For a New Supercomputing Cluster?

joib Re:Uh oh.. (387 comments)

One major advantage of IB here is that it natively supports multipathing; there's no need to avoid loops in the graph either by topology or by using spanning trees. This allows one to build networks with decent bisection BW without needing big and expensive über-switches.
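The bisection-bandwidth win from multipathing can be illustrated with a toy calculation (the link speed and link count below are made-up numbers, not a real topology):

```python
# Toy illustration: with a spanning tree, only one of the parallel
# links between the two halves of the network carries traffic; with
# IB-style multipath routing, all of them can. Numbers are made up.

link_gbps = 40.0        # per-link bandwidth (e.g. QDR-class, assumed)
parallel_links = 8      # links crossing the bisection between halves

spanning_tree_bw = link_gbps * 1           # one active path only
multipath_bw = link_gbps * parallel_links  # all paths usable

print(f"spanning tree: {spanning_tree_bw:.0f} Gb/s across the bisection")
print(f"multipath:     {multipath_bw:.0f} Gb/s across the bisection")
```

With multipath routing the bisection bandwidth scales with the number of parallel links, so you can add cheap switches and links instead of buying one enormous central switch.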

There are a few efforts to bring similar capability to Ethernet as well, TRILL and 802.1aq, neither of which AFAIK has been ratified at the time of writing.

about 3 years ago


