Comments

top

Windows Cluster Hits a Petaflop, But Linux Retains Top-5 Spot

quo_vadis Interesting (229 comments)

It is interesting that there are six new entrants in the top 10. Even more interesting is that GPGPU-accelerated supercomputers are clearly outclassing classical supercomputers such as Cray's. I suspect we might be seeing a paradigm shift, like when people moved from custom interconnects to GbE and InfiniBand, or when custom processors began to be replaced by commercial off-the-shelf parts.

more than 4 years ago
top

Ray Kurzweil Does Not Understand the Brain

quo_vadis I might be way off here.... (830 comments)

Let me preface my statements with the following disclaimer: IANAB (I am not a biologist/biochemist).

That said, the problem of reconstructing a brain from DNA is something like trying to understand a self-modifying genetic algorithm containing multiple parallel automata. To explain, I am going to conflate a couple of concepts. Self-modifying code is reasonably well known. Consider a system where the hardware is an FPGA (i.e. it can be reconfigured on the fly) and the program running on it is a mix of a boot loader, independent hardware-accelerated automata/agent programs, and some kind of feedback. The program contains an initial boot loader to load some data onto the FPGA, set up some accelerators, and enable reprogramming of the FPGA. Then it loads up some small agents and some feedback controls. These agents run in parallel for a while, reconfiguring the hardware and/or the software of other agents or groups of agents, while the feedback control allows minor selective mutation of the programming (through, say, bitstream corruption). Some of the interactions of well-defined automata are clear, but mutated automata interact in new, and therefore unmodeled, ways. The end result is the brain.

To sum it up, the DNA is just a small piece of the self-modifying base code for the first initialization of the FPGA. How the final FPGA is mapped depends on environmental factors (e.g. which agent fired first, how selection happened, small biases arising from the physical nature of the FPGA being propagated into wild changes in the end result). Thus, modeling just the base pairs is not sufficient: the interactions of the automata built from the base pairs must be modeled as well.
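If it helps, here is a toy sketch (Python, purely the analogy, no biology) of why the same "boot code" can develop into very different end states depending on firing order and noise:

```python
import random

# Toy model of the analogy above: the "DNA" is a tiny bootstrap that seeds
# a population of agents; agents then rewrite each other's rules, and a
# noisy feedback step mutates them. The end state depends on firing order
# and noise, not just the seed.

def bootstrap(n_agents):
    # Each agent's "rule" is just an integer; the seed is the only DNA-like input.
    return [i % 3 for i in range(n_agents)]

def step(agents, rng):
    a, b = rng.randrange(len(agents)), rng.randrange(len(agents))
    agents[b] = (agents[a] + agents[b]) % 7      # agent a reconfigures agent b
    if rng.random() < 0.05:                      # feedback occasionally mutates a rule
        agents[a] = rng.randrange(7)

def develop(env_seed, n_agents=16, n_steps=1000):
    rng = random.Random(env_seed)
    agents = bootstrap(n_agents)
    for _ in range(n_steps):
        step(agents, rng)
    return agents

# Same bootstrap, different "environment", very different final configurations:
print(develop(env_seed=1))
print(develop(env_seed=2))
```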

more than 4 years ago
top

How Much Smaller Can Chips Go?

quo_vadis Re:Why do they need to? (362 comments)

Um, actually Intel has done a lot of work on the architecture and microarchitecture of its processors. The CPUs Intel makes today are almost RISC-like, with a tiny translation engine which, thanks to the shrinking size of transistors, takes a trivial amount of die space. The cost of adding a translation unit is tiny compared to the penalty of not being compatible with the vast majority of the software out there.

Itanium was their clean-room redesign, and look what happened to it. Outside HPC and very niche applications, no one was willing to rewrite all their apps or, more importantly, wait for the compiler to mature on an architecture that depended heavily on the compiler to extract instruction-level parallelism.

All said, the current instruction set innovation is happening with the SSE and VT instructions, where some really cool stuff is possible. There is something to be said for Intel's choice of a CISC architecture: in RISC designs, once you run out of opcodes you are in pretty deep trouble, whereas in CISC you can keep adding them, making it possible to ship binaries that run unmodified on older-generation chips but take advantage of newer-generation features when running on newer ones.
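As a rough illustration of how "one binary, old and new chips" plays out in practice (this reads /proc/cpuinfo, so it is Linux-specific, and the dispatch targets are just placeholders, not anything Intel prescribes):

```python
# Rough illustration of runtime feature dispatch: probe the CPU's feature
# flags and pick a code path accordingly. Reads /proc/cpuinfo, so this is
# Linux-specific; native code would use the CPUID instruction instead.

def cpu_flags(path="/proc/cpuinfo"):
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

def pick_kernel(flags):
    # Prefer the newest extension the chip advertises, fall back to a
    # baseline path on older parts.
    if "sse4_2" in flags:
        return "sse4.2 code path"
    if "sse2" in flags:
        return "sse2 code path"
    return "generic x86 code path"

if __name__ == "__main__":
    print(pick_kernel(cpu_flags()))
```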

more than 4 years ago
top

How Much Smaller Can Chips Go?

quo_vadis Re:Don't make them smaller (362 comments)

You are incorrect about the reason for the lack of 3D stacking. It's not that we can't stack dies; there has been a lot of work on it. In fact, the reason flash chips are increasing in capacity is that they are stacked, usually eight layers high. The problem, quite simply, is heat dissipation. A modern CPU has a TDP of 130 W, most of which is removed from the top of the chip, through the casing, to the heatsink. Put a second core on top of it and the bottom layer develops hotspots that cannot be handled. There are currently some approaches based on microfluidic channels interspersed between the stacked dies, but those have their own drawbacks.
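A back-of-the-envelope sketch of why this bites; the thermal resistance figure below is an assumed, illustrative number, not from any datasheet:

```python
# Back-of-the-envelope junction temperature for stacked dies.
# R_JA (junction-to-ambient thermal resistance) of 0.3 C/W is an assumed,
# illustrative figure for a decent heatsink, not a datasheet value.

T_AMBIENT = 35.0   # degrees C inside a warm case
R_JA = 0.3         # C/W, heatsink + interface (assumed)

def junction_temp(power_w):
    return T_AMBIENT + power_w * R_JA

print("single 130 W die:", junction_temp(130), "C")   # ~74 C, fine
print("two stacked dies:", junction_temp(260), "C")   # ~113 C, already marginal
# And the lower die is worse off still, because its heat has to pass
# through the die above it before it ever reaches the heatsink.
```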

more than 4 years ago
top

Will Ballmer Be Replaced As Microsoft CEO?

quo_vadis Re:Are you serious? (342 comments)

Not quite. MSFT did a 2:1 split in 2002, so each pre-split share became two shares at half the price. If you invested in 100 shares of MSFT in 2000, your position is worth almost the same now as it was then (up by roughly 2%). The problem is that, accounting for inflation and given the performance of the NASDAQ over the decade, your money would have been better invested elsewhere.
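Roughly how the split math works out; the share prices below are illustrative round numbers, not exact historical quotes:

```python
# Illustrative numbers only (not exact historical quotes): a 2:1 split
# doubles your share count and halves the per-share price, so raw
# per-share price comparisons across the split are misleading.

shares_2000 = 100
price_2000 = 56.0                 # assumed round-number purchase price

shares_after_split = shares_2000 * 2   # 2:1 split
price_now = 28.5                       # assumed round-number later price

cost_basis = shares_2000 * price_2000
value_now = shares_after_split * price_now

print(f"cost basis: ${cost_basis:,.0f}")
print(f"value now : ${value_now:,.0f} ({(value_now / cost_basis - 1) * 100:+.0f}%)")
```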

more than 4 years ago
top

Half of Windows 7 Machines Running 64-Bit Version

quo_vadis Re:Artificial limits R US (tm) (401 comments)

Are you sure about this? My computer architecture knowledge is a little rusty, but let me see: what you are saying is that you would need 2^30 RAM sticks of 16 GB capacity to fill up the full 2^64 space. That contention is fine. The problem is with the second part. You would only see the complete delay if accesses were sequential; with a partitioned hierarchy it is fine. Additionally, the TLB might be large, but addressing the RAM can happen orders of magnitude faster (assuming partitioning happens on the basis of 5 bits, in the worst case you only look into a bank of 2^6 sticks), which is not that bad. Of course, access time is not uniform, but that problem has been addressed before (see NUMA systems). A full multi-exabyte system is hard to design, but with increasing memory densities it may become feasible, using techniques currently applied to supercomputer memory hierarchies.
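For the curious, the arithmetic behind this (the 24 bank-select bits are my assumption, chosen to match the 2^6 figure above):

```python
# How many 16 GiB DIMMs does it take to populate a full 64-bit physical
# address space, and how many sticks sit in one bank if the top bits of
# the address select a bank?

import math

ADDRESS_SPACE = 2**64          # bytes
DIMM_SIZE = 16 * 2**30         # 16 GiB = 2^34 bytes

dimms = ADDRESS_SPACE // DIMM_SIZE
print(f"DIMMs needed: 2^{int(math.log2(dimms))} = {dimms:,}")   # 2^30, ~1.07 billion

# Assumption: the top 24 bits of the DIMM index select a bank; the rest is
# decoded locally, as NUMA machines already do today.
BANK_BITS = 24
print(f"sticks per bank: 2^{30 - BANK_BITS} = {2**(30 - BANK_BITS)}")
```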

more than 4 years ago
top

Toshiba To Test Sub-25nm NAND Flash

quo_vadis A litho primer (80 comments)

For those unfamiliar with the field of semiconductor design, here's what the sizes mean. The Toshiba press release is about flash. In flash, the actual physical silicon consists of rectangular areas of silicon that have impurities added (a.k.a. doped regions or wells). On top of these doped regions are thinner parallel "wires" (narrower rectangles) made of polysilicon. The distance between the leading edge of one wire and the next is called the pitch, and the half pitch is half that distance. This matters because the half pitch is usually the width of the polysilicon wire and effectively becomes the primary physical characteristic from the point of view of power consumption (leakage), speed, and density.

The official roadmap for processes and feature sizes (called process nodes) is published yearly by the International Technology Roadmap for Semiconductors, a consortium of all the fabs. According to the 2009 lithography report, 25 nm flash is supposed to hit full production in 2012, so initial deployments happen a couple of years before that. Effectively, Toshiba seems to be hitting the roadmap.

The takeaway: there's nothing to see here; it's progress as usual. The big problem is what happens below 16 nm. That's the point at which current optical lithography becomes impossible, even using half- or quarter-wavelength techniques, immersion litho, and EUV.

more than 4 years ago
top

Freescale's Cheap Chip Could Mean Sub-$99 E-Readers

quo_vadis Really won't change the price (158 comments)

According to iSuppli's teardown of the Kindle, the E Ink display is $60. The main processor (made by Freescale) is ~$8. The EPD controller, which is what becomes redundant, adds only $4.31 to the BOM. The main point is that you cannot expect E Ink-based readers to get much cheaper any time soon. Any price cuts will only come about through increased competition from different display technologies like Pixel Qi's, or by sacrificing things like onboard wireless (which adds ~$40 to the cost of the Kindle).
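Running the numbers cited above (only the parts listed here, so the percentage is relative to these, not the full BOM):

```python
# Quick sanity check with the figures cited above: dropping the EPD
# controller barely moves the bill of materials next to the display
# and the wireless module.

parts = {
    "E Ink display":   60.00,
    "wireless module": 40.00,   # approximate, per the teardown
    "CPU (Freescale)":  8.00,
    "EPD controller":   4.31,   # the part the new chip makes redundant
}

listed_total = sum(parts.values())
saving = parts["EPD controller"]
print(f"listed parts total: ${listed_total:.2f}")
print(f"saving from dropping the EPD controller: ${saving:.2f} "
      f"({saving / listed_total:.1%} of just these parts)")
```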

more than 4 years ago
top

Western Digital Launches First SSD

quo_vadis Price / Performance Question (163 comments)

Here is a link to the review of the disk over at AnandTech. Interestingly, it seems this drive will not be using one of the higher-performance SSD controllers (SandForce / Indilinx), so performance should be worse than the competition's. If the price is as predicted (128 GB @ $529), then this drive won't make much sense compared to faster drives from OCZ and the like.

more than 4 years ago
top

Here We Go Again — Video Standards War 2010

quo_vadis Much Ado about nothing (292 comments)

TFA talks about the war between the Digital Entertainment Content Ecosystem (DECE) from six of the big movie studios and Keychest from Disney. But the important thing is that Keychest is not DRM. As the name implies, it is a key-management service proposed by Disney. It needs DRM such as DECE or Apple's protected AAC scheme to work. TFA's author doesn't seem to grasp this basic difference.

more than 4 years ago
top

Intel CPU Privilege Escalation Exploit

quo_vadis Wow (242 comments)

Very interesting loophole. For those too lazy to read TFA: basically this attack allows someone running as root (or in some cases as a local user) to run code at a level that even hypervisors can't touch. To put this into perspective, suppose you are running big-iron hardware with a dozen virtualized servers. With a local privilege-escalation exploit on one VM, an attacker could use this attack to take over the whole system, even the secured VMs. The worst part is that it would be undetectable: no VM and no hypervisor would be able to see it, and any AV call can be intercepted because SMM has the highest priority in the system.

The solution, on the other hand, seems pretty simple. Make the chipset block writes to the TSEG for the SMRAM in hardware (by disabling those lines), use some extra hardware to prevent those lines from being loaded into cache, and finally make every BIOS SMRAM update carry a parity value, with tools that allow the SMRAM parity to be checked.
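To illustrate that last suggestion, here is a toy sketch of the update-plus-integrity-check idea (my illustration of the proposal, not how any shipping BIOS actually does it):

```python
# Toy illustration of the "parity check on SMRAM updates" idea above:
# ship a digest alongside the SMRAM image and let an audit tool re-verify
# it later. A sketch of the suggestion, not how any real BIOS works.

import hashlib

def digest(smram_image: bytes) -> str:
    return hashlib.sha256(smram_image).hexdigest()

def make_update(smram_image: bytes) -> dict:
    # The BIOS vendor would publish the image together with its digest.
    return {"image": smram_image, "digest": digest(smram_image)}

def verify(update: dict) -> bool:
    # The audit tool recomputes the digest and compares.
    return digest(update["image"]) == update["digest"]

update = make_update(b"\x90" * 4096)               # stand-in SMRAM blob
assert verify(update)
update["image"] = b"\xcc" + update["image"][1:]    # simulate tampering
assert not verify(update)
print("tampered image detected")
```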

more than 5 years ago
top

Long-Term PC Preservation Project?

quo_vadis Get a netbook (465 comments)

The best thing to do would be to ensure your entire system is self-sufficient to some degree (i.e. the display, OS, and input devices are fixed). A netbook would be the perfect low-cost solution. Just get an Eee PC with a 4/8 GB solid-state disk, set it up with some slideshow that starts on boot, and store that. To avoid the problem of the flash disk going bad, either make a few copies on SD cards or get a ROM-based drive burned with a system image. That way, when people open it up, there won't be any question of how to connect it to a working monitor, keyboard, and so on: just plug in the battery and press the power button.
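If you go the several-SD-cards route, something like this (file names are placeholders) can confirm every copy still matches the master image before you seal the box:

```python
# Verify that each SD card image is an exact copy of the master image
# before sealing everything away. File names here are placeholders.

import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

master = sha256_of("netbook-image.img")
for copy in ["sdcard1.img", "sdcard2.img", "sdcard3.img"]:
    status = "OK" if sha256_of(copy) == master else "MISMATCH"
    print(copy, status)
```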

more than 5 years ago
top

Linus Switches From KDE To Gnome

quo_vadis I agree. KDE 4 has issues (869 comments)

I think Linus is right on this one. I have been using KDE-based Linux desktops on my primary computer for about seven years now, and KDE 4 is a huge step back. The even bigger problem is that Linux distros (Kubuntu and openSUSE) are happily pushing KDE 4.1 as the default KDE desktop. In fact, with Kubuntu 8.10 there is no option; for KDE 3.5 you have to use 8.04. KDE 4 takes the GNOME approach to desktops (i.e. the user's IQ is equivalent to that of a mostly dead rodent of unusually small size, any options would confuse said user, and therefore options are bad). Before the GNOME-loving flames begin: yes, I know external tools exist for fiddling with options, but the amount of flexibility is not the same as in KDE 3.5.10.

KDE 4 unfortunately takes the GNOME approach and removes flexibility. Worse still, all the developer time for KDE 4 is now going into polishing the interface (which, while shiny, is no better or more intuitive than KDE 3.5) rather than fixing the apps people actually use. For example, on KDE 4.2, if you add a WebDAV calendar from an https source with a self-signed cert, you will be prompted every time it reloads whether you want to accept the cert or not. Yes, that's right: even if you click "accept certificate permanently", the DE is incapable of remembering it. This has been outstanding for a while, yet all recent activity seems to go toward fixing desktop effects or making the kicker work. It's ridiculous.

/rant

more than 5 years ago
top

Generational Windows Multicore Performance Tests

quo_vadis Re:The Money Quote (228 comments)

I have never been a "Windows fanboi" (in fact, this is being posted from a Linux computer) and I am no defender of Microsoft's business practices. However, without doing code analysis it is impossible to say that this slowdown is because of DRM. Nowhere does the article suggest the authors were able to profile the kernel code and determine which modules on the path were causing the delays. So while it is theoretically possible (and perhaps likely) that the source of the delay is DRM-related, one cannot be sure. If you possess knowledge otherwise, please feel free to cite it and correct me.

more than 5 years ago
top

Generational Windows Multicore Performance Tests

quo_vadis Re:And Windows XP is still faster (228 comments)

XP is still faster by a large margin (20% to 40%, depending on the load scenario). FTFA:

If you take the raw transaction times for the database and workflow tasks, then factor them against the average processor utilization for these same workloads, you see that Windows XP consumes roughly 7.2 and 40.7 billion CPU cycles, respectively, to complete a single pass of the database and messaging workflow transaction loops on our quad-core test bed. By contrast, Windows Vista takes 10.4 and 51.6 billion cycles for each workload, while Windows 7 consumes 10.9 and 48.4 billion cycles. Translation: On quad-core, the newer operating systems are at least 40 percent less efficient than XP in the database tasks and roughly 20 percent less efficient in the workflow tasks.
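To make the percentages explicit, here are the article's cycle counts run through the obvious division:

```python
# Working the article's quoted cycle counts through to the percentages.
cycles = {                       # billions of cycles per transaction pass
    "XP":    {"database": 7.2,  "workflow": 40.7},
    "Vista": {"database": 10.4, "workflow": 51.6},
    "Win7":  {"database": 10.9, "workflow": 48.4},
}

for os_name in ("Vista", "Win7"):
    for task in ("database", "workflow"):
        overhead = cycles[os_name][task] / cycles["XP"][task] - 1
        print(f"{os_name} {task}: {overhead:.0%} more cycles than XP")
```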

more than 5 years ago
top

Generational Windows Multicore Performance Tests

quo_vadis Interesting (228 comments)

It is interesting that WinXP still outperforms either of them. The article suggests that Win7 and Vista would do better on systems that hypothetically had 16+ cores.

But nowadays, especially in tech-savvy crowds like on /., the popular thing to do is run VMs with virtual instances of Windows, which removes most of the hassle of dealing with Windows cruft. Got a worm? Restore the machine. Drivers made the system unstable? Restore the machine. These VMs are typically given only one or two cores, which is exactly the use case where WinXP does far better than its successors.

So even if we move to a world of 16+ core processors, if Win7 cannot beat a ten-year-old OS in common scenarios, how can that be called progress?

more than 5 years ago
top

What Programming Language For Linux Development?

quo_vadis I would say ... (997 comments)

I suggest C/C++ and Tcl/Tk. Together they are quite powerful, and Tk GUIs also work on Windows. I know Python tends to be the scripting language of choice here on /., but Tcl is quite widely used, and if you are working in a field that has Tcl legacy systems (like EDA tools), learning Tcl will give you a huge boost. Other than that, Perl for command-line scripting might also be handy.

about 6 years ago
top

World's First "Unclonable" RFID Chip

quo_vadis Re:A short primer on PUFs (320 comments)

I realize it's bad form to reply to my own comment, but I would like to add a bit about how authentication works using PUFs.

When the chip is manufactured, the device creator records the chip's original response to a series of challenges and calls this response vector r'. When a chip is powered up, it energizes the PUF circuitry and records the output into the internal PUF value register (k). Next, when the chip (usually a passive RFID) needs to be authenticated, the external party sends a challenge. The challenge (c) is processed through some encryption mechanism (call it f()) using the key (the saved PUF register value) to produce a response (r). For those keeping track at home, r = f(c, k). This response is sent back to the external party. The external party sends n such requests and compares the received response vector to the expected response vector (r'); if r and r' match, the chip is authenticated and work continues.

Of course, like any physical phenomenon, there is some variation between any two power-ups, so the key might change. To compensate for this, the key is chosen to be a codeword of some long code. Then, for each subsequent power-up, the new key (k') is decoded via nearest-neighbour decoding to a codeword of the same code. Finally, the distance between the new key (k') and the expected key (k) is stored in a special vector (l), which is reapplied to the key produced at the next power-up.

So, to clear up a few questions -
1. It's not like OTP (one-time pad) encoding, because a unique challenge should produce the same response from a given chip every time.
2. It is not meant to be the only encryption in use. There is usually a second code on the set of challenges, to ensure that the challenge vector itself is part of a code.
3. Man-in-the-middle and duplication attacks should be hard, as the device manufacturer can release only a small subset of real challenges and can always hold some back, which it can use to be completely sure. Additionally, it may release different sets of challenges to different customers.
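For the terminally curious, here is a compressed sketch of the handshake described above (HMAC-SHA256 stands in for f(); a real passive tag would use a much lighter primitive, and k would come from the PUF rather than a random constant):

```python
# Compressed sketch of the challenge-response handshake described above.
# HMAC-SHA256 stands in for f(); k is a stand-in for the PUF-derived key.

import hashlib
import hmac
import os

def f(challenge: bytes, k: bytes) -> bytes:
    return hmac.new(k, challenge, hashlib.sha256).digest()

# --- enrolment: manufacturer records r' for a batch of challenges ---
k = os.urandom(16)                                # PUF-derived key stand-in
challenges = [os.urandom(8) for _ in range(4)]
expected = {c: f(c, k) for c in challenges}       # the r' vector

# --- in the field: verifier sends challenges, tag answers with r ---
def authenticate(tag_key: bytes) -> bool:
    return all(hmac.compare_digest(f(c, tag_key), expected[c])
               for c in challenges)

print("genuine tag:", authenticate(k))                 # True
print("cloned tag :", authenticate(os.urandom(16)))    # False
```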

more than 6 years ago

Submissions

top

Windows coming to ARM

quo_vadis quo_vadis writes  |  more than 3 years ago

quo_vadis (889902) writes "At CES today, Microsoft announced full-blown Windows coming to ARM. This is a very Apple-like move for Microsoft, but without the whole "oh, we had this running for five years before releasing it." It sounds like we are in for driver incompatibilities a million times worse than the Vista transition. Even worse, given that Windows' biggest selling point is legacy application compatibility, requiring all third-party applications to be recompiled negates the advantages of a legacy-compatible version of Windows. Finally, the lack of strong infrastructure for supporting the transition (universal binaries, system library management) points to the transition being a painful one."

Journals

top

Windows coming to ARM

quo_vadis quo_vadis writes  |  more than 3 years ago

At CES today, Microsoft announced full-blown Windows coming to ARM. This is a very Apple-like move for Microsoft, but without the whole "oh, we had this running for five years before releasing it." It sounds like we are in for driver incompatibilities a million times worse than the Vista transition. Even worse, given that Windows' biggest selling point is legacy application compatibility, requiring all third-party applications to be recompiled negates the advantages of a legacy-compatible version of Windows. Finally, the lack of strong infrastructure for supporting the transition (universal binaries, system library management) points to the transition being a painful one.

top

A short Primer on Physically Unclonable Functions (PUFs) - I

quo_vadis quo_vadis writes  |  more than 6 years ago PUFs (so-called Physically Unclonable Functions) are currently a hot research topic, especially in the secure embedded computing community.

The fundamental idea is that a PUF should produce a unique value for a chip, in a repeatable fashion, with the side effect that modification of the chip is detectable.

PUFs are of 4 main types -
1. Optical - These are the oldest form of PUFs. They started with physicists trying to use chips as diffraction gratings: you shine a laser at the silicon vias and record the resulting light signature. They require depackaging the chip in question and are mostly impractical.

2. Silicon - Usually implemented as long delay lines, but sensitive to environmental conditions (mainly temperature and injected faults). There is ongoing research into making these better (less reliant on environmental factors).

3. Coating - These are currently considered one of the best forms of PUFs. The topmost layer of the chip has some embedded metal flakes, and the layer below has a capacitance sensor. Since the distribution of the metal flakes is random, the capacitance is random and unique to each chip (the resolution of the capacitance sensor is tuned to ensure this). This method has the added advantage that the minute someone tries to attack the chip by depackaging it, the capacitance changes and the chip's data (usually the secret key for an encryption cipher such as AES/DES) can be wiped. The main problem is that it adds a few extra fab steps, which increases cost, and the initial calibration adds cost as well.

4. Intrinsic - These are the current area of research, in particular for FPGAs. As any hardware designer knows, RAM cells initialize to random values, but most FPGAs have some small logic that resets them all to zero. If we remove that logic, we have a chip with a whole bunch of random numbers that will usually initialize the same way, based on process variation and so on. This technique has been demonstrated for FPGAs and will probably be brought over soon to full-scale chips.

When the chip is manufactured, the device creator records the chip's original response to a series of challenges and calls this response vector r'. When a chip is powered up, it energizes the PUF circuitry and records the output into the internal PUF value register (k). Next, when the chip (usually a passive RFID) needs to be authenticated, the external party sends a challenge. The challenge (c) is processed through some encryption mechanism (call it f()) using the key (the saved PUF register value) to produce a response (r). For those keeping track at home, r = f(c, k). This response is sent back to the external party. The external party sends n such requests and compares the received response vector to the expected response vector (r'); if r and r' match, the chip is authenticated and work continues.

Of course, like any physical phenomenon, there is some variation between any two power-ups, so the key might change. To compensate for this, the key is stored as a codeword of some long code. Then, for each subsequent power-up, the new key is decoded via nearest-neighbour decoding to a codeword of the same code. Finally, the distance between the new key and the expected key is stored in a special vector, which is reapplied to the key produced at the next power-up.
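A minimal sketch of that key-stabilisation step using a 5x repetition code (real designs use much stronger codes, e.g. BCH, plus helper data; this shows only the nearest-neighbour idea):

```python
# Minimal sketch of the key-stabilisation step: encode each key bit with a
# 5x repetition code at enrolment, then map each noisy power-up readout
# back to the nearest codeword by majority vote. Real designs use stronger
# codes (BCH etc.) plus helper data; this is only the nearest-neighbour idea.

import random

REP = 5

def encode(bits):                      # enrolment-time codeword
    return [b for b in bits for _ in range(REP)]

def decode(noisy):                     # nearest neighbour = majority vote
    return [int(sum(noisy[i:i + REP]) > REP // 2)
            for i in range(0, len(noisy), REP)]

key = [random.randint(0, 1) for _ in range(8)]
codeword = encode(key)

# Simulate a noisy PUF readout: flip roughly 10% of the stored bits.
noisy = [b ^ (random.random() < 0.10) for b in codeword]

print("original key :", key)
print("recovered key:", decode(noisy))
print("match:", decode(noisy) == key)   # usually True at this noise level
```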
