
Comments

Research Shows RISC vs. CISC Doesn't Matter

enriquevagu Original sources (161 comments)

It is really surprising that neither the linked ExtremeTech article nor the Slashdot summary cites the original source. This research was presented at HPCA'13 in a paper titled "Power Struggles: Revisiting the RISC vs. CISC Debate on Contemporary ARM and x86 Architectures", by Emily Blem et al., from the University of Wisconsin's Vertical Research group, led by Dr. Karu Sankaralingam. You can find the original conference paper on their website.

The ExtremeTech article indicates that there are new results for some additional architectures (MIPS Loongson and AMD processors were not included in the original HPCA paper), so I assume they have published an extended journal version of this work, which is not yet listed on their website. Please add a comment if you have a link to the new work.

I have no affiliation with them, but I knew the original HPCA work.

about three weeks ago

Research Shows RISC vs. CISC Doesn't Matter

enriquevagu Re:isn't x86 RISC by now? (161 comments)

This is why we use the term "Instruction Set Architecture" for the interface exposed to the (assembly) programmer, and "microarchitecture" for the actual internal implementation. The ISA is not bullshit, unless you confuse it with the internal microarchitecture.

about three weeks ago

Errata Prompts Intel To Disable TSX In Haswell, Early Broadwell CPUs

enriquevagu Problem and possible alternatives (131 comments)

This is a real pity for the TM community. This is not the first chip with hardware support for transactional memory: Sun's Rock was announced to have hardware TM support, and the IBM Blue Gene/Q Compute chip also supports it. Unlike other proposals for unbounded transactional memory, all these systems employ Hybrid Transactional Memory (ref, ref, ref), in which restricted hardware transactions are designed to coexist correctly with unbounded software transactions, so that a software transaction can be started when a hardware transaction fails for some unavoidable reason (such as insufficient cache size or associativity to hold the transaction's speculative data, as opposed to a conflict). Note that, in any case, very large transactions should arguably be very uncommon, since they would significantly reduce performance (much like very large critical sections protected by locks).

The problem with hardware implementations of transactional memory is that they are not simply a new set of instructions independent from the rest of the processor. HTM touches multiple aspects of the design: multiversioned caching for speculative data; commit of speculative (transactional) instructions that may later be rolled back (note that in any other form of speculation, such as instructions after a branch prediction, the speculation is always resolved before the instruction commits, because the branch commits earlier); tight integration with the coherence protocol (see LogTM-SE for an alternative to this last point, but still...); and a mechanism to support atomic commits in the presence of coherence invalidations. From the point of view of processor verification, this is a complete nightmare, because these new "extensions" impact the entire processor pipeline and coherence protocol, and verifying that every single instruction and data structure behaves as expected in isolation does not guarantee that they will operate correctly in the presence of multiple transactions (and conflicting non-transactional code) on multiple cores. There are some formal studies such as this or this, and the IBM people discuss the verification of their Blue Gene TM system in this paper (paywalled).

As others have commented, the nature of the "bug" has not been disclosed. However, since it seems to be easy to reproduce systematically, I would expect it to be related to incorrect handling of speculative data within a single transaction (or something similar), rather than to races between multiple transactions.

Regarding the alternatives, Intel cannot simply remove these instructions' opcodes, because existing code would fail. I assume the patch will make every hardware transaction fail on startup with a specific error (EAX bit 1 indicates whether the transaction might succeed on a retry; clearing this flag tells software not to retry in hardware and to start a software transaction instead). In that case, execution continues at the fallback routine indicated by the XBEGIN instruction, which should begin a software transaction. Effectively, this will behave like a software TM (STM) with additional overheads (starting the hardware transaction and aborting it; detecting conflicts with nonexistent hardware transactions) that make it slower than a pure STM implementation.
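
For illustration, here is a minimal sketch of that XBEGIN/fallback pattern, using Intel's RTM intrinsics from <immintrin.h> (compile with gcc -mrtm). The shared counter and the mutex-based fallback are placeholders of mine standing in for a real software transaction:

    #include <immintrin.h>  /* _xbegin / _xend / _XBEGIN_STARTED */
    #include <pthread.h>

    static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter;

    void increment(void)
    {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            /* Hardware transaction: the update stays speculative
             * until _xend() commits it. */
            counter++;
            _xend();
        } else {
            /* Abort path. A patched CPU would always land here, with
             * the _XABORT_RETRY bit clear so that software does not
             * retry in hardware. A real hybrid TM would start a
             * software transaction here; a plain lock stands in. */
            pthread_mutex_lock(&fallback_lock);
            counter++;
            pthread_mutex_unlock(&fallback_lock);
        }
    }

(A production version would also have the hardware path read the fallback lock, so that hardware transactions abort whenever someone holds it; omitted for brevity.)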

about a month ago

Gmail Now Rejects Emails With Misleading Combinations of Unicode Characters

enriquevagu They are right - Uses of Unicode's ambiguous letters (79 comments)

They are right to do so. There are letters in different alphabets that look very similar -- or are in fact written exactly the same, depending on the font used.

This can be exploited for interesting uses. For example, "E" and "Ε"** are respectively the Latin "e" and the Greek epsilon vowels, but in caps they are indistinguishable, at least in the Arial font. The second one is Unicode code point U+0395. My name has an "E" in it, and for my email signature I spell my name with the traditional Latin letter from the keyboard when the email is important and should be archived. By contrast, when the email is mostly irrelevant for future use (such as meeting-arrangement emails, which are useless after the meeting takes place), I spell my name with the Greek epsilon (hint: type 395 followed by Alt+X in most Windows programs). There is no obvious difference for the receiver, but a search tool can later quickly find all sent emails that can safely be deleted.
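
As a quick self-contained check of the above (variable names are mine), the two letters differ at the byte level even though they render identically in many fonts:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *latin_e = "E";        /* U+0045, UTF-8: 0x45      */
        const char *greek_e = "\xCE\x95"; /* U+0395, UTF-8: 0xCE 0x95 */
        printf("%s vs %s\n", latin_e, greek_e);  /* look alike        */
        printf("byte-equal? %s\n",
               strcmp(latin_e, greek_e) == 0 ? "yes" : "no");  /* no  */
        return 0;
    }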

While the above is a somewhat "legit" use, in general any word combining letters from different alphabets can be used to confuse and trick the receiver, for example by creating an email account that reads exactly the same as another person's. There is a nice image of the five letters a-b-c-d-e in different alphabets in the linked post. I agree with Google's decision to prevent such combinations in email accounts. It would be interesting to know the exact policy used to forbid account names, which is not detailed.

** At the time of writing, these two letters look exactly the same. Classic Slashdot lacks Unicode support and does not render the Greek letter from my comment. I tried logging into Slashdot Beta (first time, I swear!) and it seems to render a different letter... Please try this on your own computer!

about a month ago

Half of Germany's Power Supplied By Solar, Briefly

enriquevagu Gigawatts per hour (461 comments)

Note that gigawatts (GW) are units of power, gigawatt-hours (GWh) are units of energy, and "gigawatts per hour" is wrong and misleading. A plant delivering 1 GW of power for one hour produces 1 GWh of energy. I would expect the editor to correct such basic mistakes, even though they come from the linked article.

about 3 months ago

Researchers Unveil Experimental 36-Core Chip

enriquevagu Re:Is there anything new here? (143 comments)

Some background on multicore cache coherence here. You are completely right: Slashdot's summary does not introduce any novel idea. In fact, a cache-coherent mesh-based multicore system with one router associated with each core was brought to market years ago by Tilera, a startup out of MIT. Also, the article claims that today's cores are connected by a single shared bus -- that's far outdated, since most processors today employ some form of switched interconnect (an arbitrated ring, a single crossbar, a mesh of routers, etc.).

What the actual ISCA paper presents is a novel mechanism to guarantee total ordering on a distributed network. Essentially, when your network is distributed (i.e., not a single shared bus -- which describes basically every current on-chip network), there are two problems with guaranteeing ordering: (i) it is really hard to provide a global ordering of messages (as a bus does) without making all messages cross a single centralized point, which becomes a bottleneck; and (ii) if you employ adaptive routing, it is impossible to provide point-to-point ordering of messages.

Coherence messages are divided into different classes to prevent deadlock, and each class travels on its own "virtual network", a subset of the network resources. Depending on the coherence protocol implementation, messages of certain classes need to be delivered in order between the same pair of endpoints, and for this, some of the virtual networks may require static routing (e.g., Dimension-Ordered Routing in a mesh); this remedies the second problem. However, a network that provided global ordering would allow potentially huge simplifications of the coherence mechanism, since many races would disappear (the devil is in the details), and a snoopy mechanism would become possible -- which is what they implement. Additionally, this can also impact the consistency model; in fact, their design implements sequential consistency, which is the most restrictive -- yet simplest to reason about -- consistency model.
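
To make the routing point concrete, here is a minimal sketch of Dimension-Ordered (XY) Routing on a 2D mesh: a packet fully resolves the X dimension before moving in Y, which yields deterministic paths and hence point-to-point ordering. Port names and coordinates are illustrative:

    typedef enum { EAST, WEST, NORTH, SOUTH, LOCAL } port_t;

    /* Pick the output port at router (cur_x, cur_y) for a packet
     * destined to (dst_x, dst_y): X dimension first, then Y. */
    port_t xy_route(int cur_x, int cur_y, int dst_x, int dst_y)
    {
        if (dst_x > cur_x) return EAST;
        if (dst_x < cur_x) return WEST;
        if (dst_y > cur_y) return NORTH;
        if (dst_y < cur_y) return SOUTH;
        return LOCAL;  /* arrived: eject to the attached core */
    }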

Disclaimer: I am not affiliated with their research group, and in fact, I have not read the paper in detail.

about 3 months ago

WikiLeaks: NSA Recording All Telephone Calls In Afghanistan

enriquevagu Technical details? (241 comments)

Just curious... The bandwidth required to do this must be enormous; how did they implement it? Are the trunk switches compromised, recording every conversation locally and sending it to the USA later? Did they install dedicated fibers to do this? TFA lacks any details.

Thanks.

about 4 months ago

New Cologne Answers the Question: "What Does a Bitcoin Smell Like?"

enriquevagu Free advertisement (61 comments)

Invent anything you want to sell, name it after Bitcoin, and you get free advertising on Slashdot's front page!

about 4 months ago

Intel and SGI Test Full-Immersion Cooling For Servers

enriquevagu I doubt it (102 comments)

To obtain a 90% reduction in the energy bill, cooling would have to account for 90% of the power of the datacenter, which implies a PUE >= 10. For reference, five years ago virtually any datacenter had a PUE below 3, while nowadays a PUE below 1.15 is easily achievable; Facebook publishes the instantaneous PUE of one of its datacenters in Prineville, which at the moment is 1.05. This implies that any savings in cooling would reduce the bill by, at most, a factor of 1.05 (1/1.05 = 0.952, i.e., less than 5%).
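
As a back-of-the-envelope check of that argument (PUE figures from the paragraph above): with PUE = total power / IT power, eliminating all non-IT overhead cuts the bill by at most a fraction 1 - 1/PUE:

    #include <stdio.h>

    int main(void)
    {
        const double pue[] = { 10.0, 3.0, 1.15, 1.05 };
        for (int i = 0; i < 4; i++)
            printf("PUE %.2f -> max possible savings %.1f%%\n",
                   pue[i], (1.0 - 1.0 / pue[i]) * 100.0);
        return 0;  /* prints 90.0%, 66.7%, 13.0%, 4.8% */
    }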

On the other hand, I believe this is not the first commercial offering of a liquid-cooled server: Intel was already considering it two years ago, and the idea has been discussed in other forums for several years. I can't remember right now which company was actually selling these solutions, but I believe they were already on the market.

about 5 months ago

Don't Help Your Kids With Their Homework

enriquevagu Homework is part of the learning process (278 comments)

Homework is part of the learning process; helping with homework prevents the kid from doing it and, in the process, learning. Solving math problems for homework, for example, is not assigned because the teacher wants to know the final answer; it's because the teacher wants the student to confront a new type of problem and, in the process of confronting it and working out how to obtain the solution, learn. I give detailed solutions to the class problems to my undergraduate students (they would obtain them anyway, and in many cases with errors), but I always insist that looking at the solution should be a last resort when they don't know how to approach a problem (solutions are intended for checking their own work).

The problem with external help (typically from the parents) is that, in many cases, the parents get excessively involved and actually do the homework for their children, so the teacher cannot find any errors in the child's results. I know of a case of a mother who was doing exactly this because her kid didn't get very good grades (I was even asked to help with some schoolwork the child had to do on the computer!). The grades kept getting worse and, three years later, the kid was in special education. I know this is not the only reason, but I'm confident it contributed a lot.

about 5 months ago

How Do You Backup 20TB of Data?

enriquevagu Classification, first step (983 comments)

The first step is to classify the data into two groups: what you would not want to lose at any cost, and the redundant data (movies, music, etc.) that you could survive without. This is the most important step.

The second step is to back up the important data using an external 1 TB drive, tape, or similar.

Optionally, the third step is to delete the remaining 19 TB.

about 6 months ago

Scientific Data Disappears At Alarming Rate, 80% Lost In Two Decades

enriquevagu Our approach in our research group (189 comments)

This problem occurs even for people within the same group, who often struggle to repeat the simulations from our own papers, even ones as recent as a year old. The problems typically come from people leaving (PhDs finishing, grants expiring, people moving to a different job), changes in the simulation tools, etc.

In our computer architecture research group we use Mercurial to version the simulator code, so we know when each change was applied. For each simulation, we store both the configuration file used to generate it (which also records the Mercurial revision of the code being used) and the simulation results, or at least just the interesting results. Most simulators allow different verbosity levels, and in most cases most of the output is useless, so we typically store only the interesting data (such as latency and throughput), because otherwise we would run out of disk space.

Even with this setup, we often have trouble replicating the exact results of our own previous papers, for example because of poor documentation (typical in research, since homebrew simulation tools are not maintained to the standard one would expect of commercial code), changes that introduce subtle effects, code that gets lost when someone leaves, or simply large files that get deleted to save disk space (for example, simulation checkpoints or network traces, which are typically very large).

However, you typically do not need to look back and replicate old results, so keeping all the data is wasted effort. I completely understand that research data gets lost, and I think it is largely unavoidable.

about 9 months ago

IE Zero-Day Exploit Disappears On Reboot

enriquevagu Disappears on reboot... (103 comments)

Sure it disappears!

Unless you're running IE as admin, with UAC disabled, and the malware has installed a hypervisor -- in which case you're hijacked forever without any chance of detecting it. How long before we see that?

about 10 months ago

VLC Reaches 2.1

enriquevagu Turning off the computer (127 comments)

Yet it still does not support turning off the computer, despite this being a feature requested for years. That's the ONLY missing feature preventing it from being my default video player.

about a year ago

LGPL H.265 Codec Implementation Available; Encoding To Come Later

enriquevagu Decoder, not codec (141 comments)

CODEC: COder-DECoder; but there is no (en)COder here!

1 year, 16 days

EFF Slams Google Fiber For Banning Servers On Its Network

enriquevagu Server description (301 comments)

"all ISPs are deliberately vague about what qualifies as a 'server.'"

... because TCP clearly specifies it.

The fact that some programs may or may not behave correctly when implementing a server (e.g., Skype), or the fact that, in some cases, ISPs allow certain services or ports, does not mean that a "server" is something arcane. It's you who doesn't know what one is.
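
At the socket level, "running a server" has a precise meaning: performing a passive open, i.e., binding a port and listening for incoming connections. A minimal sketch (error handling omitted, port number arbitrary):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);  /* arbitrary port */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 16);                 /* this makes it a "server" */
        int client = accept(fd, 0, 0);  /* wait for a peer */
        close(client);
        close(fd);
        return 0;
    }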

about a year ago

Computer Memory Can Be Read With a Flash of Light

enriquevagu Comparison to PCM (69 comments)

The link to the actual Nature Communications paper is here: Non-volatile memory based on the ferroelectric photovoltaic effect.

This somewhat resembles Phase-Change Memory (PCM). PCM devices are built from a material that, under a high current, undergoes a thermal fusion and switches between two states, amorphous and crystalline. The change affects two properties: light reflectivity (exploited in CDs and DVDs) and electrical resistance (exploited in emerging non-volatile PCM memories). The paper cites PCM and other types of emerging non-volatile memories.

In this case, it is the polarization that changes, without requiring a thermal fusion, thereby increasing the endurance of the device -- one of the main shortcomings of PCM. The other main shortcoming of PCM is write speed, due to the slow thermal process; in the paper they claim something like 10 ns. If this can be manufactured at large integration scale and low cost, it will probably be a revolution in computer architecture.

about a year ago

Supercomputers At TACC Getting a Speed Boost

enriquevagu Units (14 comments)

GB != Gbps: gigabytes are a unit of capacity, while gigabits per second are a unit of bandwidth (and there is a factor of 8 between bytes and bits, besides).

about a year ago

Submissions

enriquevagu hasn't submitted any stories.

Journals

enriquevagu has no journal entries.
