
Yale Physicists Measure 'Persistent Current'

Soulskill posted more than 4 years ago | from the current-events dept.

Science 68

eldavojohn writes "Modern processors rely on wires mere nanometers wide, and now Yale physicists have successfully measured a theoretical 'persistent current' that flows through them when they are formed into rings. The researchers predict this will help us understand how electrons behave in metals — more specifically, the quantum mechanical effect that influences how these electrons move through the metals. Hopefully, this work will shed new light on what dangers (or uses) quantum effects could have on classical processors as the inner workings shrink in size. The breakthrough involved rethinking how to measure this theoretical effect, as they previously relied on superconducting quantum interference devices to measure the magnetic field such a current would create — complicated devices that gave incorrect and inconsistent measurements. Instead, they turned to nothing but mechanical devices, known as cantilevers ('little floppy diving boards with the nanometer rings sitting on top'), that yielded measurements with a full order of magnitude more precision."


68 comments


It's the little things that impress (1)

rcolbert (1631881) | more than 4 years ago | (#29703491)

I'm not quite sure what the application would be for persistent current, although my wife might have some ideas on the subject. In any case, I'm always amazed at folks who can work and innovate at such a small scale. It's like they could build a model ship in a bottle wearing boxing gloves.

Re:It's the little things that impress (3, Funny)

Jurily (900488) | more than 4 years ago | (#29703781)

I'm not quite sure what the application would be for persistent current, although my wife might have some ideas on the subject.

I'm sorry for your wife.

Re:It's the little things that impress (4, Interesting)

Bat Country (829565) | more than 4 years ago | (#29703877)

I had an EE teacher who owned his own little company. His company had done some research using GA [wikipedia.org] to evolve a minimal adder circuit on an FPGA [wikipedia.org] . This adder was simpler than the theoretically optimal adder circuit, using fewer gates than should be possible.

They really thought they had something (it worked every time with no apparent variation on real hardware) and started putting it on a few other FPGAs to test the solution. It didn't work on the other FPGAs.

They did a full analysis of the solution and found out that although some inputs and outputs were mapped to a closed loop not connected to VDD or GND (they had no power and no output), if they were removed from the program on the working FPGA, the adder stopped working. They finally had to chalk it up to relying on electron migration and/or induction currents in the closed loops for a correct answer. They'd accidentally made something like a quantum adder, but it was entirely specific to the silicon they'd evolved it on, making it useful, but not interesting.
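For readers curious what such a setup looks like, here is a toy sketch of the idea in Python: a genetic algorithm evolving a NAND-gate netlist toward a 1-bit full adder, with fitness scored over all input cases. It is purely illustrative (simulated gates, not a real FPGA, so it cannot reproduce the silicon-specific effects described above); all names and parameters are made up.

```python
import random

def evaluate(genome):
    """Fitness: correct (sum, carry) outputs over all 8 input cases (max 16).

    A genome is a list of NAND gates; gate i reads two earlier signals.
    Signals 0, 1, 2 are the inputs a, b, carry-in; the last two gates are
    taken as the sum and carry-out outputs.
    """
    score = 0
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                sig = [a, b, cin]
                for x, y in genome:
                    sig.append(1 - (sig[x] & sig[y]))  # NAND
                s, cout = sig[-2], sig[-1]
                score += (s == (a ^ b ^ cin))
                score += (cout == ((a & b) | (cin & (a ^ b))))
    return score

N_GATES = 12  # arbitrary budget, a few more than the 9-NAND textbook adder

def random_genome():
    return [(random.randrange(i + 3), random.randrange(i + 3)) for i in range(N_GATES)]

def mutate(genome):
    g = list(genome)
    i = random.randrange(len(g))
    g[i] = (random.randrange(i + 3), random.randrange(i + 3))
    return g

random.seed(0)
pop = [random_genome() for _ in range(200)]
pop.sort(key=evaluate, reverse=True)
start_best = evaluate(pop[0])
for _ in range(200):
    if evaluate(pop[0]) == 16:
        break
    # keep the 50 fittest, refill with mutants of them (elitist selection)
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
    pop.sort(key=evaluate, reverse=True)
best = pop[0]
print(start_best, "->", evaluate(best))
```

Because the fittest genomes are always carried over, the best fitness can only improve; whether a given run reaches a perfect 16 depends on the random seed, much as the real result depended on which chip it ran on.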

Re:It's the little things that impress (3, Interesting)

Bat Country (829565) | more than 4 years ago | (#29703921)

gah! Interesting, but not useful.

Re:It's the little things that impress (1)

TheLazySci-FiAuthor (1089561) | more than 4 years ago | (#29704289)

(wandering two steps offtopic here)

I think FPGA/GA-evolved, silicon-specific processing would be useful.

GA and FPGA are two animals of science for which I have great interest.

In this case, although the creation was specific to that silicon, it was still simpler than theory says should be possible.

My point is that, even though *mass-production* of "magic chips" may not be possible, simple unit-by-unit production of them may be.

For supercomputing tasks or other very specialized areas of computing, there are often one-of-a-kind units anyhow - if some high-performance, or highly-efficient device can in fact be made, I see no huge drawback to using it for at least research purposes.

This is potentially useful: evolving highly-simplified or highly-optimized, one-of-a-kind chips or devices for very specific tasks with disregard for reproduction of the device.

It doesn't matter if you can't reproduce the machine, as long as it produces the desired output.

Thoughts?

p.s. I read this account before on /. - is this in fact a true account? I don't mean to strike at you, I simply remain highly skeptical and highly hopeful. I would like to see some citation. After all, I seem to recall hearing a similar story about GA and FPGA, complete with the inability to reproduce the outcome - only in that case (in my memory) I recall the function of the chip or device to have been caused or enabled by some other electronic device in the room outputting some kind of interference at a regular interval.

p.p.s. And to move slightly back on topic: if persistent current exists, then that seems a step toward cold computing, or persistent computing - correct me if I am mistaken.

Re:It's the little things that impress (2, Interesting)

Bat Country (829565) | more than 4 years ago | (#29704407)

Well, in theory they should have been able to reproduce the process on a different FPGA, resulting in a different "optimal" adder which may be more or less optimal. Since it seemed to rely on self-interference caused by imperfections in the chip, you'd just have to evolve on other chips until you found a similarly optimal solution. The reason it was only interesting but not useful is that an FPGA is a lot bigger than an actual adder circuit. It took the whole FPGA to evolve the minimal adder, and trying to simplify something that specific (probably relying on exact distances and input voltages to produce the desired output) would be essentially impossible. Thus trying to evolve individual circuits this way to undercut theoretically optimal designs would mean far more waste, far worse space and cost efficiency, and far more power.

A more appropriate use of GA would be to develop actual silicon in a full simulator much the same as that evolved antenna [nasa.gov] project NASA backed. If you could come up with a design that always worked as long as fabrication succeeded, that's much more productive and far more efficient.

That the story is true there's little reason to doubt. Ever since that guy in 1997 evolved a 64x64 speech recognition chip on an FPGA using GA, people have been going batshit trying to take advantage of the magic of GA. The odds are good that an optimal solution will arise on a chip which turns out to be specific to that chip. I imagine if you used a GA to evolve something specific to those faulty Pentium chips that it would fail to operate properly on a machine with a non-broken ALU.

Whether it had anything to do with my instructor's company, on the other hand, I don't know.

Re:It's the little things that impress (1)

TheLazySci-FiAuthor (1089561) | more than 4 years ago | (#29705031)

Excellent clarification!

My misunderstanding indeed hinged on the fact that the FPGA itself was the only device on which this 'simple' adder could operate (due to specifics of its material structure). I must have also thought the FPGA itself to be simpler than an analogous, hard-wired circuit.

Thank you :)

Re:It's the little things that impress (1)

oPless (63249) | more than 4 years ago | (#29705523)

Grandparent post is correct. To anthropomorphize things a little...

GA/GP cheat unreservedly. If they find even the smallest amount of wiggle room in your fitness function, they will leverage it, and give you 'interesting' results.

That includes not only the fitness scale you give it, but on how you measure the fitness too.

All heady stuff, I've wasted many days of cpu cycles playing around with GP - great fun :o)

Re:It's the little things that impress (1)

metaforest (685350) | more than 4 years ago | (#29709331)

In this case, although the creation was specific to that silicon, it was still simpler than theory says should be possible.

Seems reasonable to me. Until the LM741 was put into production, transistor op-amps were voodoo. Each assembly had to be manually tuned for the desired function. Even with the LM741 (and other integrated op-amps), some tweaking is still required. I am sure that many common digital functions in IC applications have to be 'tuned' during final assembly to make them work as designed. What we get in the part-tray/anti-static tube is a tuned circuit trimmed to spec.

Re:It's the little things that impress (4, Informative)

eh2o (471262) | more than 4 years ago | (#29704979)

Induction isn't a quantum-scale effect, just plain old electromagnetics. Floating pins (that is, any pin configured as an input but not connected to a circuit) are notorious for causing strange effects that mess up both digital logic and analog sensing. Some pretty spooky behavior can result, like states that change when you wave your hand over the chip (due to the capacitance created by the proximity of the hand).

The input pin-state doesn't allow significant current to flow, and all microcontrollers use it as the default state on power-up since it negates the possibility of a short circuit. But if an input isn't connected to anything, it will generate spontaneous readings due to charge accumulation / current drift--some of which might be inductive but it can also be plain old resistive pathways since there are no perfect insulators (for example a PCB might look like a resistor of a few dozen megaohms, or even less if the board is dirty from handling).

This is easily dealt with by grounding the unused pins, either externally or internally (by switching them to an output-low state). It comes up often enough that forgetting to tie disconnected pins to ground is what I'd call a classic "101" embedded hardware design bug. I've done it a few times myself, with various apparently inexplicable results, followed by feeling stupid when I realize what's actually happening.

old question related to parent.... (1)

proto (98893) | more than 4 years ago | (#29715419)

I'm not an electronics expert, so tell me if you've heard this question before. Is "capacitance" the reason the screen image improved when I touched the "rabbit ear" antennas of my old analog TV? Now that digital TV is here the question loses its relevance, but it's an old question I never had an answer for.

Re:It's the little things that impress (1, Interesting)

mulaz (1538147) | more than 4 years ago | (#29703913)

If we study really small currents, develop the technology around them, and bring the "normal" currents (~mA) down to ~uA, a battery that today lasts one day (a smartphone under heavy use) will last 1000 times longer (about 3 years).

Of course, this is only true for the logic circuits; power used for, say, the (back)lighting can only be reduced so far (nowhere near uA), since there we are already approaching 100% power-to-light efficiency.
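The parent's arithmetic checks out: since a battery's capacity (in mAh) is fixed, runtime scales inversely with average current draw. A quick sketch, where the 1400 mAh capacity is an assumed figure:

```python
def runtime_hours(capacity_mah, draw_ma):
    """Idealized battery runtime: capacity divided by average current draw."""
    return capacity_mah / draw_ma

capacity = 1400.0             # mAh; assumed, roughly a 2009-era smartphone
heavy_use = capacity / 24.0   # the average draw (~58 mA) that empties it in a day

print(runtime_hours(capacity, heavy_use))           # one day
print(runtime_hours(capacity, heavy_use / 1000.0))  # 24,000 h, about 2.7 years
```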

As an ME... (1, Offtopic)

Thelasko (1196535) | more than 4 years ago | (#29703503)

As one of the few mechanical engineers on Slashdot, I approve of this experiment.

Wait... (4, Interesting)

Anonymous Coward | more than 4 years ago | (#29703525)

“Yet these currents will flow forever, even in the absence of an applied voltage.” is this some form of perpetual energy or am I a fool?

Re:Wait... (1)

NoYob (1630681) | more than 4 years ago | (#29703669)

“Yet these currents will flow forever, even in the absence of an applied voltage.” is this some form of perpetual energy or am I a fool?

Try to harness it.

My guess, the force that's moving the current would be related to the force that keeps the electrons buzzing around the nucleus.

I'm sure there's going to be someone, sometime, who's going to "develop" some technology with some really good looking math, put wires perpendicular to the rings to get the "induced current" and sell it to Wall Street as a perpetual motion generator - doesn't violate the First Law of Thermodynamics because it's on the quantum level!

Hmmmmmmmmm, I'm thinking....

Re:Wait... (1)

oldhack (1037484) | more than 4 years ago | (#29703835)

It's QM. Like vacuum is not real absolute vacuum, and shits zap in/out randomly. Conservation of energy will be statistical, is my guess, and they will have a catch against trying to make use of temporary disequilibrium. Like trying to steal energy from the past and the future with time machine.

Re:Wait... (4, Informative)

Rising Ape (1620461) | more than 4 years ago | (#29703899)

Conservation of energy is absolute, as far as we know, not statistical, even in QM. It would be a major revolution in physics if that weren't the case, as conservation is associated with important symmetries, such as the laws of physics not changing from past to future.

Entropy increasing *is* statistical, but no, you can't get around it. See Maxwell's Demon or the Brownian ratchet.

There are existing examples of persisting currents, in superconductors. No way to get energy out without reducing the current, of course, and you have to put energy in to get it back.

Re:Wait... (2, Interesting)

oldhack (1037484) | more than 4 years ago | (#29703997)

Hm... I should have wrote CoE is averaged over time/space instead of "statistical"? Otherwise, stuffs can't pop in/out into "vacuum"?

Re:Wait... (0)

Anonymous Coward | more than 4 years ago | (#29704987)

Who tells you that this is happening?

Re:Wait... (0)

Anonymous Coward | more than 4 years ago | (#29705385)

Yo mama. STFU, adults talking here.

Re:Wait... (2, Interesting)

Rising Ape (1620461) | more than 4 years ago | (#29705237)

It doesn't pop in and out as such - certainly you can't just pluck particle/antiparticle pairs out of empty space without supplying energy. For a quantized field, the vacuum expectation value (crudely, the average value in empty space) for certain quantities can be non-zero, just like atoms in a ground state have a definite non-zero energy.

Even if the energy of a ground state is non-zero, you can't take that energy out - energy must be conserved and there's no lower energy state for free space to fall into.

The energy density of the vacuum is in essence undefined (in quantum theory at least - in general relativity it's a different matter, which is where problems come in). Only energy differences matter.

Well, that's my understanding anyway, but then I was an experimentalist, not a theorist. Perhaps someone will come along and correct me.

Re:Wait... (1)

tenco (773732) | more than 4 years ago | (#29776903)

The energy density of the vacuum is in essence undefined (in quantum theory at least - in general relativity it's a different matter, which is where problems come in). Only energy differences matter.

Is it really undefined? Can't we say it's infinite? Sure, you can discriminate part of the frequencies by setting boundary conditions with a cavity (and that's where the Casimir effect comes in) - but that's it.

Re:Wait... (1)

BitterOak (537666) | more than 4 years ago | (#29706135)

Hm... I should have wrote CoE is averaged over time/space instead of "statistical"? Otherwise, stuffs can't pop in/out into "vacuum"?

That's correct. It's an example of the Heisenberg uncertainty principle. Time and energy are conjugate variables, so the shorter the time you look at something, the greater the apparent non-conservation of energy, according to the relation Delta E times Delta t >= hbar/2.

Re:Wait... (0)

Anonymous Coward | more than 4 years ago | (#29707221)

Oh no, not the energy-time uncertainty principle!

It is not rigorous (mostly due to the lack of a time operator), and interpretations vary, but none of them says that it allows energy not to be conserved.

One well-understood interpretation is that the greater the uncertainty in the energy of a system, the shorter its lifetime.

Re:Wait... (1)

tenco (773732) | more than 4 years ago | (#29776913)

Time and energy are conjugate variables,

Time and frequency are.

Re:Wait... (0)

Anonymous Coward | more than 4 years ago | (#29782783)

Time and energy are conjugate variables,

Time and frequency are.

reciprocal, dolt

Re:Wait... (1)

frieko (855745) | more than 4 years ago | (#29703955)

Well, yes. The law of conservation of energy states that ALL energy is perpetual. Perhaps you're thinking of perpetual energy production.

Re:Wait... (1, Insightful)

PPH (736903) | more than 4 years ago | (#29704481)

An applied voltage accelerates electrons. In the absence of anything slowing them down, like interactions with atoms where they lose energy, no voltage should be required for them to keep moving forever.

So the result of this experiment raises the question: why are these electrons not interacting with anything? We have some good ideas about why they don't in superconducting materials. So this extends the realm of this behavior into other states of matter. Or it's new behavior.

It's conservation of energy (2, Informative)

Nicolas MONNET (4727) | more than 4 years ago | (#29704715)

As long as you're not taking energy out of it, no, it's not. Well, actually, energy is perpetual; it's power that's not. Perpetual motion exists in a vacuum. It just doesn't on Earth, with all that friction requiring perpetual power to counteract.
You can also maintain a perpetual current in a superconductor, as long as you're not messing with the magnetic field it generates. But just like a hard vacuum, it's not a natural state down here.

Perpetual energy, but no power. Memory apps. (4, Interesting)

Richard Kirk (535523) | more than 4 years ago | (#29709987)

The energy is perpetual, so you aren't a fool. Congratulations. However, for as long as it lasts, no-one gets any power out of it. It is just a tiny, fixed current going in a circle giving a small, static magnetic field.

On a smaller scale, consider electrons circling a nucleus. They are waves, not little planets orbiting a sun, but some of them are going in circles endlessly. They aren't losing energy because they have to be in one quantum state, or emit or absorb a whole chunk of energy to go to another. They can't slowly leak their orbital energy away and spiral into the nucleus, which is a good thing for us, as matter as we know it would rapidly cease to exist.

What we have here in our little ring is the same sort of thing, but on a larger scale. You have lots of electrons, all in a stable state. Instead of a few electrons orbiting a single nucleus, you have a lot of outer electrons spread out amongst a lot of nuclei. If you have a stable state, then the loop will enclose an integer number of magnetic flux quanta. The most likely state, and the lowest energy state if there is no applied field, is to have no persistent current and zero flux quanta. However, at a finite temperature, it is likely that the system is not in its lowest energy state. Why doesn't the loop let the flux quanta out and drop to the lowest energy state? Well, the quantum maths is a bit tricky, but a rough explanation goes like this... To let the flux go, one part of the ring has to stop conducting at one point and put up a resistance. This will let out the flux quantum and absorb the energy as it goes. While this makes sense in energy terms, there is no reason why one bit of the loop should do it rather than another. The superconducting SQUID devices mentioned in the article are a superconducting loop with a weak point, so you can have all sorts of elegant fun with the physics as flux quanta go in and out.

So, this is no use as an energy source, but it could be very useful as a form of memory. Suppose you have a loop of 18 carbon atoms with one hydrogen on each - a bit like benzene but bigger. Like benzene, it has a loop of pi electrons above and beneath, and these electrons can do the same thing. The first excited state (one flux quantum in the loop) is about 0.5 eV above the ground state, so it should be stable at room temperature. You can read the energy state non-destructively by approaching a similar loop with a weak point (a bit like a SQUID, again), or you can destructively blank the state by twisting the ring, destroying the pi delocalization. This is not a new idea - I know it was talked about in the eighties.
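The room-temperature stability claim above can be sanity-checked with a Boltzmann factor: kT at 300 K is about 0.026 eV, so a 0.5 eV gap is roughly 19 kT and thermal excitation is negligible. A quick sketch:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_factor(gap_ev, temp_k):
    """Relative thermal occupation of a state gap_ev above the ground state."""
    return math.exp(-gap_ev / (K_B * temp_k))

kT = K_B * 300.0                      # ~0.026 eV at room temperature
print(kT)
print(boltzmann_factor(0.5, 300.0))   # ~4e-9: effectively never excited
```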

Could this be used for memory? (2, Interesting)

John Hasler (414242) | more than 4 years ago | (#29703645)

n/t

Re:Could this be used for memory? (2, Informative)

ikkonoishi (674762) | more than 4 years ago | (#29703747)

Possibly. It was only just measured; they need time to figure out what its limits are.

Re:Could this be used for memory? (2, Interesting)

Interoperable (1651953) | more than 4 years ago | (#29704929)

I looked into the effect of persistent current a bit and it turns out that someone has figured out how to use it as a photonic memory. Check out the Wikipedia article [wikipedia.org] on Aharonov-Bohm nano-rings.

The Harris Lab website [yale.edu] has a number of papers on the persistent current effect. The Aharonov-Bohm effect is one of the weirdest observed effects in physics, so reading about the persistent current effect that arises from it is (arguably) a fun read.

I'm no EE (1, Offtopic)

camperdave (969942) | more than 4 years ago | (#29703649)

I'm no electronics engineer or chip designer, but couldn't they make things more compact by going vertical? Chips are always planar. Wouldn't you get a faster IC if you stacked the components instead? Make a sandwich; silicon, insulator, aluminum, insulator, silicon, insulator, aluminum, etc. (The aluminum would be for heat sink purposes.) Build it up 16, or 32, or even 64 layers thick. Each layer could be a processor core.

Re:I'm no EE (1, Informative)

Anonymous Coward | more than 4 years ago | (#29703675)

They are already developing them.

googling it and picking a random one

http://www2.computer.org/portal/web/csdl/doi/10.1109/MM.2007.59

Re:I'm no EE (3, Informative)

Xiaran (836924) | more than 4 years ago | (#29703689)

I am an electronics engineer, and you are forgetting about heat dissipation. We would love to have 3-dimensional integrated circuits, but unless we come up with a good way to dissipate the heat, they will be little molten balls of almost pure Si (or GaAs).

Re:I'm no EE (1)

ahem (174666) | more than 4 years ago | (#29703739)

So the GP suggested a layer of aluminum for just that purpose. Is the heat carrying capacity of aluminum insufficient? What if you had active cooling sucking heat out of the aluminum at the chip's edge?

Re:I'm no EE (3, Informative)

Anpheus (908711) | more than 4 years ago | (#29703791)

It takes a heatsink the size of a small house to deal with current overclocked CPUs and that's on a single plane. The more layers you put between your heatsink and the bottom-most layer of your CPU, the poorer the conduction of heat away from it and the worse off you'll be.

He's quite right, without a heatsink the latest CPUs instantly rise to over 90C and then reset or throttle themselves down to unusable levels.

Re:I'm no EE (2, Interesting)

Anonymous Coward | more than 4 years ago | (#29706787)

That is assuming the processor manages to react in time. I've seen videos of processors whose heatsinks have been removed, where the processors appear to vaporize. What they actually did was apparently heat so quickly they deformed in such a way as to launch themselves from the socket at great velocity. (Too fast for the cameras to catch with any real clarity.)

So basically, the processor took off like a rocket, (albeit technically it was a projectile upon leaving the motherboard, rather than having any period of powered flight).

Really gives one an appreciation of the heat being generated by a modern CPU.

Re:I'm no EE (1, Interesting)

Anonymous Coward | more than 4 years ago | (#29703793)

we already do something similar [wikipedia.org] and the problem is that it is not sufficient for 3D. Now if we could stack individual components of motherboard on top of each other and use lateral cooling current ...

Re:I'm no EE (4, Interesting)

amorsen (7485) | more than 4 years ago | (#29703811)

Is the heat carrying capacity of aluminum insufficient?

Depends on how thick you make the layer. Look at the kind of heat sink high-performance chips need today, with just one layer. Multilayer chips need comparable cooling performance per layer.

You can of course add a few low-power layers to a high-power chip, which may be worth it at some point just to shorten interconnect wires (or in order to use inductive coupling). It's a lot of complexity for what is so far a small gain though.

Re:I'm no EE (0)

Anonymous Coward | more than 4 years ago | (#29782805)

deep, not thick

Re:I'm no EE (1)

The Archon V2.0 (782634) | more than 4 years ago | (#29704061)

So the GP suggested a layer of aluminum for just that purpose. Is the heat carrying capacity of aluminum insufficient?

Yes. I'm no EE but I am something of a hardware geek who has installed a lot of processors. There's a reason heatsinks are so massive. If a thin layer of anything was so good that it could be stuck between two processors and provide adequate cooling to both with no airflow, then CPUs wouldn't even have fans.

What if you had active cooling sucking heat out of the aluminum at the chip's edge?

Active cooling is not that good. Way better than passive cooling of course, but you're asking too much of it. Assume a "3d" processor was effectively 4 Core 2 Duos sandwiched together. Look at the heatsink/fan you need to run ONE of those. You'd be asking one or two small fans to do the work of four large ones, and that's assuming perfect heat transference from the center of the processor stack to the outside. (Hint - nothing's perfect. Again, this is why we have big heatsinks - because a small one just can't magically dump all the heat into the air.)

Plus, surface area increases slower than volume. The volume would be generating the heat, while the surface area would be removing it. Just not gonna work on anything except an incredibly tiny (by current chip standards) processor. It's far easier to make something the size of the i7.
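The surface-vs-volume point can be put in numbers. In an idealized stack where only the top face is cooled, each added layer contributes its full power but no new cooling area, so the heat flux the heatsink must handle grows linearly with the layer count. A rough sketch (the 65 W per layer and 2 cm^2 die are assumed figures):

```python
def heatsink_flux(layers, watts_per_layer=65.0, die_area_cm2=2.0):
    """W/cm^2 crossing the single cooled face of an n-layer stack (idealized)."""
    return layers * watts_per_layer / die_area_cm2

for n in (1, 2, 4, 8):
    print(n, heatsink_flux(n))  # flux doubles every time the layer count does
```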

Re:Heat dissipation (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29703809)

The problem is not heat dissipation: it is the inefficiency of our computational machinery.

Logic devices have almost zero efficiency, in that for each watt going in, nothing or almost nothing goes into the logic operations themselves. Almost everything is converted into heat or physical motion.

So... 500 watts into a server is 500 watts of heat to dissipate... and zero watts of computing (whatever that would be).

What we need is a more efficient computing design.

Re:Heat dissipation (0)

Anonymous Coward | more than 4 years ago | (#29704949)

On CMOS devices, static power is an order of magnitude less than dynamic power (the logic switching).

Re:Heat dissipation (1)

PhunkySchtuff (208108) | more than 4 years ago | (#29706353)

Check out Reversible Computing [wikipedia.org] for some info on where this isn't the case - the idea is to arrange things so that a computation doesn't waste energy as heat.

4-d circuits already! (1)

kahizonaki (1226692) | more than 4 years ago | (#29709655)

3-d is the past, upwards and onwards!

(Actually, I'm not kidding. The brain is a 4-dimensional circuit/computer. In addition to being spatially extended in three dimensions, computations are also temporally extended (thus adding a fourth). What might be an atomic instruction on a modern "2-d" CPU could require all four dimensions of the brain. Think in terms of remembering a word, or the lines to a poem. You get the first part, and some aspect of that influences the trajectory of the system to move to a state that represents the next part, and so on. There is a dynamic feedback loop which is extended over (and reliant on) time. Cf. dynamical systems theory.)

Re:I'm no EE (0)

Anonymous Coward | more than 4 years ago | (#29704185)

I'm no electronics engineer or chip designer, but couldn't they make things more compact by going vertical? Chips are always planar.

This is wrong. RAM chips use 3D structures to create the capacitors used to store charge. VFETs are another application.

The thing is that these processes are vastly more expensive and harder to design with. It would probably be much cheaper to look at how dies are mounted in packages, and then on motherboards. Why are we still using package technologies from the seventies? We should be able to do something completely different by now.

And talking of that - need to run. Off to see John Cleese perform now... :-P

Re:I'm no EE (1)

drinkypoo (153816) | more than 4 years ago | (#29710115)

There was a paper released a little while back (and discussed here, IIRC) which mentioned that as you shrink a capillary tube down to absurdly small sizes, water actually flows through it faster. I forget what effect was responsible, but perhaps the solution is to have a tiny heat pipe integrated into the IC, and to use another heat pipe to carry heat away from it (at the CPU-cooler level). No solid metal can dissipate heat fast enough for what you suggest; a single-layer CPU might draw 50-75 watts already, and dissipating that heat is non-trivial. The first Pentium chip, whose dissipation was in that same ballpark, melted its socket.

I'll Be Hornswoggled! (1)

PingPongBoy (303994) | more than 4 years ago | (#29703713)

Sounds like perpetual motion!

Nanometers? (4, Funny)

one cup of coffee (1623645) | more than 4 years ago | (#29703729)

"Modern processors rely on wires mere nanometers wide." -Nothing to see here, move along.

Direction? (0)

Anonymous Coward | more than 4 years ago | (#29703741)

Left to right or right to left?

Re:Direction? (2, Interesting)

Anonymous Coward | more than 4 years ago | (#29703867)

my right or your right? better question -- clockwise or counter clockwise and with respect to
1. the cantilever?
2. gravity?
3. earth's magnetic field?
4. solar magnetic field?

Re:Direction? (2, Interesting)

smoker2 (750216) | more than 4 years ago | (#29704305)

Better answer - both directions. RTFA.

Whoop-dee-doo! (0)

TheRealMindChild (743925) | more than 4 years ago | (#29703783)

But what does it all mean, Basil?

And I have measured the.... (1)

DragonTHC (208439) | more than 4 years ago | (#29703853)

wait, did they just say theoretical?

granted, it's all a theory, but is it really theoretical if they've measured it?

Re:And I have measured the.... (2, Informative)

Anonymous Coward | more than 4 years ago | (#29703919)

Yes, it just makes the theory stronger. Just like the theory of evolution. For all practical purposes it is fact. We just aren't so arrogant as to call it a law anymore.

Re:And I have measured the.... (1)

mugurel (1424497) | more than 4 years ago | (#29704021)

Yale physicists have successfully measured a theoretical 'persistent current'

Yes, they said theoretical! They must've measured it theoretically, I'm sure!

I guess they mean they have measured a quantity that until that point had only been postulated (based on theory).

quantum power strip (1)

nitehawk214 (222219) | more than 4 years ago | (#29704361)

This reminds me of people who plug power strips back into themselves, and then wonder why it doesn't power their devices.

Re:quantum power strip (1, Interesting)

Anonymous Coward | more than 4 years ago | (#29705207)

Funny you should say that. I first began considering the idea of "persistent current" when I realized your setup could actually retain current if there were no resistance.

These are some weird physicists (0)

Anonymous Coward | more than 4 years ago | (#29704907)

they form rings and have current passing through them.... Weird stuff....

Short explanation (5, Informative)

Anonymous Coward | more than 4 years ago | (#29705387)

I am a solid state physics Ph.D. student. There seems to be a lot of confusion about how these things work, which is unsurprising given the lack of details in this slightly sensationalist story published by Yale about work done at Yale. Hopefully this helps a bit.

First, these currents don't spontaneously arise out of the blue. There is an external applied magnetic field, so every metal ring has at least one flux line passing through it. As most should know, a changing magnetic field induces an electric current. Normally, in non-superconducting metals, inelastic scattering of electrons causes the current to dissipate (i.e., there is resistance).

The unique thing about these metal rings is that they are smaller than the electron's phase coherence length, or the distance the electron travels before it is scattered inelastically. Electrons will scatter elastically off of impurities, but those collisions are not dissipative.

This Yale group by no means discovered this phenomenon, nor are they the first to measure it. What they did was measure it with greater accuracy. The things that have been unclear for a while are the direction the current travels and its magnitude. Hopefully these new measurements will shed some light on the matter.
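For a feel for the magnitudes involved, here's a back-of-the-envelope estimate using the textbook ballistic-ring result I0 ~ e*v_F/L. The gold Fermi velocity and the 1 micron ring size are my own illustrative assumptions, not numbers from TFA:

```python
# Order-of-magnitude estimate of the persistent current in a clean
# (ballistic) metal ring: I0 ~ e * v_F / L.
# The specific numbers below are assumed for illustration only.
import math

e = 1.602e-19           # electron charge, C
v_F = 1.4e6             # Fermi velocity of gold, m/s (approximate)
diameter = 1e-6         # ring diameter, m (assumed)
L = math.pi * diameter  # ring circumference, m

I0 = e * v_F / L        # characteristic persistent current, A
print(f"I0 ~ {I0 * 1e9:.1f} nA")
```

Real measured currents come out smaller than this clean-limit scale (disorder and temperature suppress them), but it shows why nanoamp-sensitive detection is needed.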

P.S. I hate Slashdot's comment system. Every time I clicked off this typing box, it refused to accept any input until I clicked randomly around the screen for at least 15 seconds.

Re:Short explanation (5, Informative)

Anonymous Coward | more than 4 years ago | (#29705465)

I should also add to this that one must remember that electrons are as much waves as they are particles. Because of the circular geometry, an electron wave function must accumulate a phase that is an integer multiple of 2*pi around the loop.

The group is measuring the changes in magnetic moment that these currents produce.
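To see where that phase condition leads, here's a toy free-electron-on-a-ring model (my own sketch, not the Yale group's analysis). The levels go as (n - Phi/Phi0)^2, and the equilibrium current I = -dE/dPhi of the filled levels repeats every flux quantum Phi0 = h/e; the ring size and electron count below are arbitrary assumptions:

```python
# Toy model: free electrons on a 1-D ring threaded by flux Phi.
# Levels are E_n = pref * (n - Phi/Phi0)^2, and the persistent current
# I = -dE/dPhi of the filled levels is periodic in Phi with period Phi0.
import numpy as np

h = 6.626e-34                 # Planck constant, J*s
e = 1.602e-19                 # electron charge, C
phi0 = h / e                  # flux quantum for a normal-metal ring, Wb

hbar = h / (2 * np.pi)
m = 9.109e-31                 # electron mass, kg
L = 3e-6                      # ring circumference, m (assumed)
pref = hbar**2 / (2 * m) * (2 * np.pi / L) ** 2  # level energy scale, J

# flux grid from -2*phi0 to +2*phi0; 1000 grid steps = one phi0
phi = np.linspace(-2 * phi0, 2 * phi0, 4001)

# fill the 5 lowest ring levels at each flux value
ns = np.arange(-10, 11)
E_levels = pref * (ns[:, None] - phi[None, :] / phi0) ** 2
E_levels.sort(axis=0)
E = E_levels[:5].sum(axis=0)  # ground-state energy vs flux

I = -np.gradient(E, phi)      # persistent current, A

print(f"max persistent current in this toy model: {abs(I).max() * 1e12:.0f} pA")
```

The sawtooth-like, phi0-periodic current this produces is exactly the kind of flux-periodic magnetic moment the cantilever experiment picks up.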

Re:Short explanation (0)

Anonymous Coward | more than 4 years ago | (#29782843)

No, electrons are motes; they /make/ waves.

Re:Short explanation (1)

John Hasler (414242) | more than 4 years ago | (#29707665)

What, besides lowering the temperature, can be done to increase the phase coherence length?

Re:Short explanation - Slashdot's comment system (0)

mspohr (589790) | more than 4 years ago | (#29709049)

Actually, Slashdot's comment system is run using a quantum plugin that actually implements these loop current effects. Most computers run this just fine but you, being a physics Ph.D. student have skewed the system since it can sense that you are starting to understand how it works. It cannot tolerate this encroachment so it is trying to throw you off track by distracting you.

Do not be discouraged. Look on this as a challenge. Keep clicking randomly and this will confuse it enough for it to start working again.

Apparently I am the only one to read this as if (0)

Anonymous Coward | more than 4 years ago | (#29705901)

the currents were in the physicists, not some wires. I had a crazy mental image of some guys in lab coats grabbing their ankles and/or applying ammeters to their heads and feet.

link to original paper (0)

Anonymous Coward | more than 4 years ago | (#29710601)

The original paper was posted in June: http://lanl.arxiv.org/abs/0906.4780

In the quantum world the question is how long it takes to make a transition. Only with a transition do we have exchange of energy and therefore dissipation.

"persistent current" just means the authors of the paper did not observe a transition. For superconducting rings the theoretically estimated time to see a change is longer then the age of the universe, a good definition for "impossible".
