Intel and SGI Test Full-Immersion Cooling For Servers
For cooling savings to cut the energy bill by 90%, cooling would have to account for 90% of the data center's power, which implies a PUE >= 10. For reference, five years ago virtually any data center had a PUE below 3, and nowadays a PUE below 1.15 is easily achievable. Facebook, for example, publishes the instantaneous PUE of one of its data centers in Prineville, which at the moment is 1.05. At that level, eliminating the cooling overhead entirely would reduce the bill by at most a factor of 1.05 (1/1.05 = 0.952).
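A quick back-of-the-envelope check of those numbers (just a sketch; the 1.05 figure is the Prineville value mentioned above):

    # PUE = total facility power / IT equipment power.
    # For cooling alone to allow a 90% bill reduction, it must be 90% of the total:
    it_fraction = 1 - 0.90
    pue_needed = 1 / it_fraction        # = 10.0

    # At a PUE of 1.05 (the Prineville figure), removing *all* overhead,
    # cooling included, saves at most:
    pue = 1.05
    max_saving = 1 - 1 / pue            # ~0.048, i.e. about 4.8% of the bill
    print(pue_needed, max_saving)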
On the other hand, I believe this is not the first commercial offering of a liquid-cooled server: Intel was already considering it two years ago, and the idea has been discussed in other forums for several years. I can't remember right now which company was actually selling these solutions, but I believe they were already on the market.
Don't Help Your Kids With Their Homework
Homework is part of the learning process; doing it for the kid prevents them from doing it themselves and, in the process, learning. Assigning math problems as homework, for example, is not done because the teacher wants to know the final answer; it's because the teacher wants the student to confront a new type of problem and, in working out how to reach the solution, learn. I give my undergraduate students detailed solutions to the class problems (they would obtain them anyway, and in many cases with errors), but I always insist that looking at the solution should be a last resort when they don't know how to approach a problem; the solutions are intended for checking their own work.
The problem with external help (typically from the parents) is that, in many cases, the parents get too involved and actually do the homework for their children, so the teacher cannot find any errors in the child's results. I know of a mother who was doing exactly this because her kid didn't get very good grades (I was even asked to help with some computer work the child had to do for school!). The grades kept getting worse and, three years later, the kid was in special education. I know this is not the only reason, but I'm confident it contributed a lot.
How Do You Backup 20TB of Data?
The first step is to classify the data into two groups: the data you would not want to lose at any cost, and the redundant data (movies, music, etc.) that you could survive without. This is the most important step.
The second step is to back up the important data to an external 1 TB drive, tape, or similar (see the sketch below).
Optionally, the third step is to delete the remaining 19 TB.
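As a rough illustration of steps one and two (a sketch only: the paths, the mount point and the "redundant" extensions are made up, and a real backup would also want verification):

    import shutil
    from pathlib import Path

    SOURCE = Path("/data")                  # hypothetical 20 TB pool
    BACKUP = Path("/mnt/external/backup")   # hypothetical 1 TB external drive
    REDUNDANT = {".mkv", ".mp4", ".mp3", ".flac"}   # step 1: data you could survive without

    for f in SOURCE.rglob("*"):
        if f.is_file() and f.suffix.lower() not in REDUNDANT:
            dest = BACKUP / f.relative_to(SOURCE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)           # step 2: copy only the irreplaceable data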
Scientific Data Disappears At Alarming Rate, 80% Lost In Two Decades
This problem occurs even for people within the same group: we often have trouble repeating the simulations from our own papers, even those as recent as one year old. The problems typically come from people leaving (a finished PhD, an expired grant, a move to a different job), changes in the simulation tools, etc.
In our Computer Architecture research group we use Mercurial to version the simulator code, so we know when each change was applied. For each simulation, we store both the configuration file used to generate it (which also records the Mercurial revision of the code being used) and the simulation results, or at least the interesting part of them. Most simulators allow different verbosity levels, and most of the output is useless, so we typically keep only the interesting data (such as latency and throughput); otherwise we would run out of disk space.
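A minimal sketch of that bookkeeping (illustrative only: the simulator binary name and its flags are invented; the "hg id" call is real):

    import json
    import subprocess
    from pathlib import Path

    def run_simulation(config_path: str, results_dir: str) -> None:
        results = Path(results_dir)
        results.mkdir(parents=True, exist_ok=True)

        # Record which Mercurial revision of the simulator produced these results.
        rev = subprocess.check_output(["hg", "id", "-i"], text=True).strip()
        (results / "provenance.json").write_text(json.dumps({
            "hg_revision": rev,
            "config": Path(config_path).read_text(),
        }, indent=2))

        # Run the simulator itself (hypothetical binary and flags), keeping only
        # the summary statistics instead of the full verbose output.
        subprocess.run(["./simulator", "--config", config_path,
                        "--summary-only", "--out", str(results / "summary.txt")],
                       check=True)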
Even with this setup, we often have trouble replicating the exact results of our own previous papers, for example because of poor documentation (typical in research, since homebrew simulation tools are not maintained to the standard one would expect from commercial code), changes that introduce subtle effects, code that gets lost when someone leaves, or simply large files that get deleted to save disk space (for example, simulation checkpoints or network traces, which tend to be very large).
However, you typically do not need to look back and replicate old results, so keeping all the data would be a mostly wasted effort. I fully understand why research data gets lost, and I think it is largely unavoidable.
IE Zero-Day Exploit Disappears On Reboot
Sure it disappears!
Unless you're running IE as admin with UAC disabled, the malware installs a hypervisor, and you're hijacked forever without any chance of detecting it. How long before we see that?
VLC Reaches 2.1
Yet it still does not support shutting down the computer, despite this feature having been requested for years. That's the ONLY missing feature preventing it from becoming my default video player.
LGPL H.265 Codec Implementation Available; Encoding To Come Later
CODEC: COder-DECoder; but there is no (en)COder here!
EFF Slams Google Fiber For Banning Servers On Its Network
"all ISPs are deliberately vague about what qualifies as a 'server.'" ... because TCP clearly specifies it.
The fact that some programs may or may not behave as a server (e.g. Skype), or that in some cases ISPs allow certain services or ports, does not mean that a 'server' is something arcane. It's you who don't know what one is.
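In TCP terms, a 'server' is simply the endpoint that performs a passive open: it listens on a port and accepts incoming connections, while a client actively opens connections towards it. A minimal sketch (the port number is arbitrary; the two functions would run in different processes):

    import socket

    def server_side(port: int = 8080) -> None:
        """The 'server': binds a port, listens, and accepts incoming connections."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        conn, addr = srv.accept()       # the passive open is what makes it a server
        conn.close()
        srv.close()

    def client_side(host: str, port: int = 8080) -> None:
        """The 'client': actively opens a connection to a listening endpoint."""
        cli = socket.create_connection((host, port))
        cli.close()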
Computer Memory Can Be Read With a Flash of Light
The link to the actual Nature Communications paper is here: Non-volatile memory based on the ferroelectric photovoltaic effect.
This somewhat resembles Phase-Change Memory (PCM). PCM devices are made of a material that, when a high current is applied, heats and melts, switching between an amorphous and a crystalline phase. This changes two properties: optical reflectivity (exploited in rewritable CDs and DVDs) and electrical resistance (exploited in emerging non-volatile PCM memories). The paper cites PCM and other types of emerging non-volatile memories.
In this case, it is the polarization that changes, without requiring thermal melting, which increases the endurance of the device, one of the main shortcomings of PCM. The other main shortcoming of PCM is write speed, limited by the slow thermal process; the paper claims something like 10 ns here. If this can be manufactured at large scale and low cost, it will probably be a revolution in computer architecture.
Supercomputers At TACC Getting a Speed Boost
GB != Gbps. A gigabyte is a quantity of data; a gigabit per second is a transfer rate (and 1 GB = 8 Gb).
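For the record, a trivial sketch of the difference between the two units:

    GIGABYTE_BITS = 8 * 10**9       # 1 GB (decimal prefix) expressed in bits
    LINK_RATE_BPS = 1 * 10**9       # a 1 Gbps link carries 10^9 bits per second
    seconds_per_gigabyte = GIGABYTE_BITS / LINK_RATE_BPS   # = 8.0 s to move 1 GB
    print(seconds_per_gigabyte)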
Major Advance Towards a Proof of the Twin Prime Conjecture
"the existence of any finite bound, no matter how large, means that that the gaps between consecutive numbers don't keep growing forever"
Actually, I disagree with the unfortunate wording of that sentence. The gaps between consecutive prime numbers are variable, and on average they DO keep growing forever. This is a well-known result: the density of prime numbers decreases as the numbers grow. However, since the gap between consecutive primes is variable and does not follow a regular function (otherwise it would be very easy to compute prime numbers), even at a very low density of primes we can still find pairs of consecutive primes with a gap of only 2.
The problem under study is not whether the gap between consecutive primes keeps growing forever (which is true only on average, over a long sequence of integers), but whether there are infinitely many pairs of primes with gap 2. The new result shows that there exist infinitely many pairs of primes with a gap of 70 million or less. However, this does not imply that no consecutive pairs of primes with a gap larger than 70M exist (in fact, they do).
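A quick sketch of both facts over the primes below one million: the average gap keeps growing, yet gap-2 pairs keep turning up even near the top of the range:

    def primes_up_to(n):
        """Simple sieve of Eratosthenes."""
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    ps = primes_up_to(1_000_000)
    gaps = [b - a for a, b in zip(ps, ps[1:])]

    # Average gap among the first 1000 primes vs. the last 1000 below one million.
    print(sum(gaps[:1000]) / 1000, sum(gaps[-1000:]) / 1000)
    # Twin primes (gap == 2) still appear among those last 1000 gaps.
    print(sum(1 for g in gaps[-1000:] if g == 2))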
ZFS Hits an Important Milestone, Version 0.6.1 Released
Why not label it 1.0? It looks like it is still in beta...
UK Researchers Build Micron LED Light Based Wireless Network
C'mon, this is Slashdot. Is it so complex to say that they employ NRZ modulation on a light carrier, rather than "a bit like Morse Code from a torch"? Is it so difficult to refer to the switching/modulation frequency, or baud rate, rather than "they can also flicker on and off around 1,000 times quicker than the larger LEDs"?
The idea of using an LED for communication is presented as a novelty in the summary, when every remote control works this way, and even the original 802.11 specs included a PHY layer based on IR. You are trying to make articles more dumb-user-friendly, but what you are actually achieving is driving away the users who might make valuable comments.
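For reference, a toy sketch of what NRZ on-off keying amounts to (the symbol rate and the bit pattern are made up for illustration):

    # NRZ on-off keying: one symbol per bit, LED on = 1, LED off = 0.
    SYMBOL_RATE = 1_000_000                 # 1 Mbaud, an arbitrary illustrative figure

    def nrz_levels(bits: str):
        """Map a bit string to LED drive levels, one level per symbol period."""
        return [1 if b == "1" else 0 for b in bits]

    levels = nrz_levels("10110010")
    # Each level is held for 1/SYMBOL_RATE seconds; with NRZ the bit rate equals
    # the baud rate, which is what "flickers on and off 1,000 times quicker" hides.
    print(levels, 1 / SYMBOL_RATE)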
"Bill Shocker" Malware Controls 620,000 Android Phones In China
This is NOT a virus; viruses infect a system, typically by modifying other existing executable files, and then replicate themselves. These are malware applications which have been installed by the users themselves. The actual news here, not covered in the summary, is that these applications were not designed to be malware; rather, they employ a free (as in gratis) SDK which turns the phone into a zombie.
However, note that simply removing the applications should remove the "infection". The Android security model does not allow an application to infect the OS, unless the user has rooted the phone and runs the application as root (and in that case, it's your own fault).
Einstein@Home Set To Break Petaflops Barrier
If it were a single system at position 24 in the Top500, it would likely be about 3x more power-efficient than all of these individual computers combined. These sorts of initiatives are impressively inefficient (but very effective); this is why the 'cloud' model won the battle over the 'grid' model. It only works because the computing power is donated, not paid for. On the other hand, the equivalent supercomputer would likely cost 3-8x as much as the aggregate (with respect to the sum of the costs of all these computers), because it would be custom-made.
Automation Is Making Unions Irrelevant
Repeat after me: automation is good. It makes us, humankind, more productive. With the same human work we can produce more for ourselves, so on average our wealth improves. The people who no longer need to do manual and repetitive jobs can move to more creative work that produces more benefit for mankind. Gutenberg's printing press was good. E-mail was good, despite eliminating jobs at the post office. Hydraulic excavators are good. All of them reduce the number of jobs, and unions cannot and should not try to prevent this. Fortunately, we no longer rely on picks and shovels to dig tunnels.
The problem is not automation, which is good for mankind as a whole; the problem is the distribution of wealth. We are facing a serious problem in which those who own the machines (capital) become much richer while producing the same as before, and those who lose their jobs become poorer. I certainly believe this problem will worsen over time, as more jobs are made obsolete by technology and "the system" cannot provide an alternative way to earn a living.
One option might be to move to a system in which everyone receives a basic "social income", enough to live on, while those with a job would earn more. However, this raises serious problems of its own, such as obvious abuse and unfairness. I see the problem, but I don't foresee a clear solution.
GLIBC 2.16 Brings X32 Support, ISO C11 Compliance, Better Performance
Note that, on many processors, the legacy x86 and the x86-64 implementations are (almost completely) separate and use different processor resources. Between the larger and better resources and the higher number of registers, the x86-64 pipeline achieves better performance on the same processor. The lower memory usage also helps performance, but its impact is minor.
Samsung TVs Can Be Hacked Into Endless Restart Loop
The vulnerability was originally disclosed here, not in the posted link.
This vulnerability only works from the same broadcast domain as the TV, since the remote-control protocol relies on broadcast messages to announce the service. This means your TV cannot be attacked from the Internet. Let's hope Samsung releases a fix soon, in any case.
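Roughly why this stays local (a toy sketch; the port and payload are invented, not Samsung's actual protocol): the service announcement goes to the broadcast address, and routers do not forward such packets beyond the local subnet:

    import socket

    # Toy service announcement over UDP broadcast (port and payload are made up).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(b"TV-REMOTE-SERVICE", ("255.255.255.255", 55000))
    # 255.255.255.255 is the limited broadcast address: routers drop it, so only
    # hosts on the same broadcast domain ever see (or can abuse) the announcement.
    sock.close()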