
The Man Who Invented the 26th Dimension

Oh, and what about the graphic dimensions and hidden dimensions? Just because your working physical-space dimensionality fits in 640K -- at least, if you have a backing store with a few megadimensions to spare -- doesn't mean that you don't need someplace for God to hang out and run things, or dimensions needed for your inner spiritual eye to be able to visualize the projective results of the stuff in the 640K.

Now, for just 2^{640!} dollars, I'd be happy to sell you an expansion space with an extra 400K dimensions, to let you offload God into a meta-space of Its own and still have sufficient dimensional resolution to be able to achieve satori or visualize the cosmic whole in some sort of projection. It comes with both serial and parallel dimensional portals, not to mention a built-in communication channel connecting your working dimensionality with God-space. It also permits you to expand your paltry 640K dimensional mother-Universe to a proper full-scale Universe with all 2^{1024} dimensions that the underlying physics can use -- with indirect dimensional addressing -- accessible.

For the first time, your matter assemblers and compilers will have the dimensions that they need to work. Inflation will be tremendously accelerated. You can cut the time required for a full-scale big bang reboot to end up with Intelligent Life from 14 billion years to a mere 20! Just think of what you can evolve after that!

rgb


New Mayhem Malware Targets Linux and UNIX-Like Servers

Or, to put it another way, you can't fix stupid.

rgb


New Mayhem Malware Targets Linux and UNIX-Like Servers

Surely you must be joking. There have been Explorer bugs that went unpatched for six months. No operating system is immune, and security flaws arising from bugs in code are an inevitable accompaniment to having code in the first place, especially complex code with lots of moving parts (some of them infrequently tested/visited). But Microsoft has historically been Macrosquishy when it comes to security and patches: LOTS of holes, and many of them (in the historical past) have taken a truly absurd amount of time to be patched, resulting in truly monumental penetration of trojans and viruses via suppurating wounds like Outlook. I still get an average of one email message a day that makes it through my filters purporting to be from a correctly named friend or relative and encouraging me to click on a misspelled link. You think those messages arise from successful data-scraping via Linux malware or Apple malware or FreeBSD malware?

Perhaps, driven by the need to actually compete with Apple and Linux (including Android) instead of resting on their monopolistic laurels, they have cleaned up their act somewhat over the last few releases of Windows. But on average over the last 10 or 15 years -- certainly since the widespread adoption of apt and yum to auto-maintain Linux -- the mean lifetime of a security hole in a Linux-based system, all the way out to user desktops, has been around 24 hours: a few hours to patch it and push it to the master distro servers, mirror it, and pull it with the next update. Microsoft hasn't even been able to acknowledge that a bug exists on that kind of time frame, let alone find the problem in the code, fix it, test it, and push it.

If they are doing better now, good for them! However, look at the relative penetration of malware even today. Linux malware has a very hard time getting any sort of traction. Apple malware has a very hard time getting any sort of traction. Windows? It's all too easy to whine that it gets penetrated all the time because it is so popular and ubiquitous, except that nowadays it is neither.

rgb


Harvesting Energy From Humidity

While that may seem slow, people in remote areas may have few alternatives.

Other than:

Solar power, at roughly $1/watt (and then "free" for 10-20 years), its price falling on a nearly Moore's Law trajectory.

Wind power -- expensive and unreliable, but simple technology, and humidity isn't reliable either.

The entire panoply of standard sources -- coal, oil, gasoline, nuclear, hydroelectric, alcohol, diesel, methane -- which we can deliver in a variety of ways, including simply delivering a small generator and fuel.

I would truly be amazed if a new, patented technology of this sort were within an order of magnitude -- or even two -- of the cost of a solar source superior in nearly every way, and there are very few places where the humidity is high, the temperatures are reasonable, and the sun does not produce enough light to make solar work. This is truly an edge technology unless they make it astoundingly cheap.

rgb about a month ago

New Evidence For Oceans of Water Deep In the Earth

Re:Ingredients for water? (190 comments)

The interesting question is, I suppose, whether or not this source of "water" is responsible for the oceans, or whether they came about from e.g. cometary impacts post-crust formation (before the crust formed they don't really count as "cometary impacts"; it was all just part of the formation process). This has a significant impact on the probability of finding water on extrasolar planets and hence on the CO_2/O_2/H_2O/N_2 life cycle establishing itself. There is of course evidence, in the form of e.g. Europa and Titan, that there is abundant water out there that COULD form seas on planetoid objects in our own solar system if the temperature/atmosphere composition range were right, but I'm not sure that we have a compelling, evidence-supported picture of the details of the Earth's early evolution and how much of it was a comparatively rare accident, how much is commonplace in planetary formation.

If we built a really, really big telescope at e.g.
one of the Lagrange points -- maybe something with a 100 meter or even a kilometer primary mirror and similar scales for the optical paths -- we might be able to "see" extrasolar planets at a level of detail sufficient to resolve the chemistry and maybe more of smaller planets and planetary objects, not just the ones with orbits and mass parameters sufficient to make the current cut. And see a lot of other really cool stuff as well, of course -- such an eye in the sky could look across time to the big bang and immediate aftermath a lot more effectively than the Hubble.

Let's see: a primary mirror with a diameter of d = 1000 meters, \alpha = 1.22 \lambda/d, visible light is roughly 1 micron, so diffraction-limited resolution would be on the order of 10^-9 radians. Nearish stars are order 10^16 meters away, so we could barely resolve details 10^7 meters in size. Darn, that's just about the size of the Earth. We could actually photograph Jupiter-sized planets, but Earth-like planets would still just be a (fat) dot. Of course in the UV spectrum we could get one more order of magnitude out of ordinary optics, so we could possibly see continent-sized features and oceans in the UV (and resolve an Earth as more than just a dot). And people might find a way to cheat resolution a bit more than that -- build a coherent array of smaller telescopes, whatever. It would need damn good optics, as well.

One can dream, right? The Big Eye. Crowdfunding, anyone? If everybody on the planet contributed a dollar a year, we could build it inside a decade. Or maybe two. I might even live to see the first pictures come back. But probably not.

rgb about 2 months ago

Evidence of Protoplanet Found On Moon

Re:"Simplest explanation" (105 comments)

Damn, I had to give up modding this to answer, but I can't leave this. One cannot "capture" a body the size of the moon by any two-body elastic (e.g. gravitational) interaction.
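As a quick check on the Big Eye arithmetic a comment back -- a minimal Python sketch, using the assumed round numbers from that comment (a 1 km mirror, ~1 micron light, a target 10^16 meters away), not any real instrument's specs:

```python
import math

def rayleigh_resolution(wavelength_m, mirror_diameter_m):
    """Diffraction-limited angular resolution, alpha = 1.22 * lambda / d."""
    return 1.22 * wavelength_m / mirror_diameter_m

d = 1000.0      # kilometer-scale primary mirror (assumed)
lam = 1.0e-6    # "visible light is roughly 1 micron"
dist = 1.0e16   # nearish stars, order 10^16 meters

alpha = rayleigh_resolution(lam, d)   # ~1.2e-9 radians
feature = alpha * dist                # smallest resolvable detail at that range
earth_diameter = 1.27e7               # meters, for comparison

# One decade shorter wavelength (UV, ~0.1 micron) buys one decade of resolution:
uv_feature = rayleigh_resolution(1.0e-7, d) * dist

print(f"alpha = {alpha:.2e} rad; feature ~ {feature:.2e} m vs Earth {earth_diameter:.2e} m")
```

Same formula, one decade shorter wavelength, is the UV gain claimed above.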
Within irrelevant perturbations such as gravitational wave radiation (presuming such a thing to exist), energy is conserved, and if it starts out unbound to the Earth it will end up unbound to the Earth. One can capture in a three (or more) body interaction, but in that case the missing energy has to go someplace, and we are talking about a LOT of energy in the case of an orbiting moon -- enough energy to basically melt the moon and the earth and then some. One would expect to see some sort of orbital remnants of such a many-body event, and all of the other bodies in the solar system are a bit too far away to be good candidates in terms of the forces needed, and show none of the orbital perturbation one would expect as a consequence.

That leaves inelastic events. Tidal interaction is inelastic over time, but to make it strong enough to mediate a "capture" it would damn near be a collision anyway, brushing up on Roche's Limit (look that up). Also, that too would leave the nascent moon in an orbit much closer than the initial radius of its apparent orbit. And it wouldn't explain the apparent deficit of heavier elements and an iron core in the moon (thought to have been literally blown out of the incoming body in the collision and either ejected altogether -- carrying away the missing energy and momentum needed to leave the remnant in orbit -- or absorbed into the Earth) and a bunch of other things.

So really, the collision hypothesis makes "enough" sense and is consistent with enough data that it is AFAIK the "accepted" explanation of the moon's origin, with the usual caveat that contrary evidence or a better argument in the future might change that, as we cannot easily be certain about events 4.5 billion years ago.

rgb about 2 months ago

Matthew Miller Named New Fedora Linux Project Leader

Re:Gnome; Mate; Cinnamon; Unity; Xfce4...Save Me (24 comments)

It is, indeed, sad. Gnome 2 was a perfectly usable desktop.
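Returning to the capture argument in the previous comment: that a two-body gravitational flyby conserves energy, and hence leaves an unbound body unbound, is easy to demonstrate numerically. A toy sketch in normalized units (GM = 1; the initial conditions are made up for illustration):

```python
import math

def accel(x, y, mu=1.0):
    """Newtonian gravity toward the origin: a = -mu * r / |r|^3."""
    r3 = (x * x + y * y) ** 1.5
    return -mu * x / r3, -mu * y / r3

def specific_energy(x, y, vx, vy, mu=1.0):
    """E = v^2/2 - mu/r; E > 0 means the orbit is unbound (hyperbolic)."""
    return 0.5 * (vx * vx + vy * vy) - mu / math.hypot(x, y)

# Made-up unbound initial conditions: incoming body well outside the primary
x, y, vx, vy = 10.0, 2.0, -1.2, 0.0
dt = 1.0e-3
e_in = specific_energy(x, y, vx, vy)

ax, ay = accel(x, y)
for _ in range(40_000):  # velocity-Verlet (symplectic), through closest approach
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

e_out = specific_energy(x, y, vx, vy)
# e_out == e_in to numerical precision: still positive, still unbound.
```

No choice of flyby geometry changes the sign of E; only a third body or an inelastic (collisional/tidal) channel can carry the energy away.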
Gnome 3, as you say, looks and acts like a "kinda" tablet interface, but it doesn't make it on tablets and truly sucks on desktops and laptops compared to G2. My own solution has been to use a release that still supports G2. It is an imperfect solution, but I work at the interface level, and the imperfections and risks are all occult or fixable with care, where nothing can "fix" G3 but snipping the entire fork and pretending that it never existed. In the meantime, I have my six hot-keyed desktops, my keystroke-cyclable windows, and can work (or play) for hours in ten or twenty windows and never touch my mousepad or take my fingers off of the home keys.

That's the real problem with G3. Tablets are lovely in the Macintosh/Apple sense -- you can learn to use their interface in a day, and pay for that knowledge for a lifetime in reduced productivity compared to what you could realize with a more complex interface with more configurable options and ways of doing things. If they had kept G2's keyboard/mouse-driven structure, general function, and customizability and merely added support for a user-selectable touchscreen swipe mode, I'd a) never have noticed, and b) if I ever found myself trying to run G3 on a tablet and DID notice, have been pleasantly surprised to find that it had a tablet-savvy mode that otherwise preserved my desktop setup (as best as possible given the differential screen sizes).

The other sad thing about not only Gnome but most of the rest of the desktops is that no progress was made in places where it would have been GOOD to make it. For example, I work on at least three or four different (all Linux-based) systems. They have different screen resolutions, different-sized hard disks, different-speed CPUs, different-capacity memories. Yet Gnome is still too stupid for me to be able to clone my home directory across those systems -- or e.g.
NFS mount a single home directory from a server on all of those systems -- and have it just work, fixing the font sizes, default window sizes, and so on. I've written my own highly custom startup scripts in the past that do things like determine the architecture etc. and then do the right thing by literally overwriting some of the core startup data, or follow complex conditional branches when logging in, but this sucks and is a pain to maintain. Yet nobody even tries to do better at the right level (that is, within e.g. gconf or the gnome configuration manager itself). Linux has actually gotten worse as a client-server, shared-home-directory architecture, compared to what it was when it was closer to e.g. SunOS and so on back in the 90's (and it wasn't completely great back then).

Whine, whine, I know. If I were a good open source human and wanted this fixed, I'd participate. And if I were a frickin' robot who didn't need sleep, or if I weren't I but we, me and my ten clones, I would. But one lifetime isn't enough time to do all that I'm doing already. So all I can do is pray that somebody, somewhere, keeps G2 alive or forks it out and develops it in ways that do NOT break the features that I rely on most to support my daily work activities.

G3 was actually the deal breaker with me and Fedora -- I stuck with it until then and even thrived, but to get G2 I had to go back to Centos 6 (and overlay the good parts of Fedora, namely Centos ports of the key Fedora add-on software). If Fedora re-embraced a functional G2 fork or clone (that worked and was well-maintained) I'd be perfectly happy to go back to it. I never minded having bleeding edge software handy, even at the moderate expense of stability -- as long as they don't break Fedora Core to the point where it interferes with workflow, that is, at the desktop provisioning level.

rgb about 3 months ago

Red Dwarfs Could Sterilize Alien Worlds of Life

Re:I wish they'd make up their minds...
(76 comments)

Fair enough (and yeah, I know about Gamma Ray bursts :-). The Sun could hiccup tomorrow and wipe out most of the life on the planet in an event hardly noticeable from light years away (Larry Niven wrote a lovely short story based on this theme) -- it wouldn't even take a full gamma ray burst. But the point is -- why do they assume that planets orbiting a Red Dwarf will not have a magnetic field? Indeed, I would expect the opposite -- if it has a nickel-iron core like the Earth does, one would expect magnetic protection like the Earth has. If it were a super-Earth like the one discussed on /. yesterday, with density, size, and mass all larger than Earth's, you'd even have additional gravity to bind the atmosphere.

But Red Dwarfs are the last place to look as a source of "real radiation", even if one has to BE closer to the star -- see comments on the greenhouse effect -- in order to stay warm. Venus has atmosphere to a fault. Put Venus near a Red Dwarf and wait a billion years or so and maybe its atmosphere will thin enough and alter chemistry enough to support Earth-like temperatures and free water. Make Mars the size of the Earth -- but keep the oceans and a pronounced CO_2 bias in the atmosphere -- and maybe Mars would have sustainable temperatures and equatorial oceans, at least. It's doubtful that the "habitable zone" is as narrow as we might think it is using the Earth and our own solar system as the N=1 sample. And then, it is a big Universe.

rgb about 3 months ago

Red Dwarfs Could Sterilize Alien Worlds of Life

I wish they'd make up their minds... (76 comments)

...about greenhouse gases. We are told that high concentrations will make a Venus out of Mars, that in spite of the young sun being substantially "cooler" than the sun is now, the Earth's high GHG concentration over most of the last 600 million years is responsible for it being substantially warmer than it is now, etc.
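For a rough sense of where the "habitable zone" sits, the standard radiative-equilibrium estimate can be worked in a few lines. The stellar temperature and radius below are assumed round numbers for a mid M-dwarf (not figures from the article), and the formula ignores greenhouse warming entirely -- which is rather the point under discussion:

```python
import math

def t_eq(t_star, r_star, a, albedo=0.3):
    """Radiative equilibrium temperature of a planet at orbital radius a:
    T_eq = T_star * (1 - A)^(1/4) * sqrt(R_star / (2a))."""
    return t_star * (1.0 - albedo) ** 0.25 * math.sqrt(r_star / (2.0 * a))

R_SUN = 6.96e8   # meters
AU = 1.496e11    # meters

t_earth = t_eq(5772.0, R_SUN, AU)   # ~255 K, the textbook greenhouse-free value

# Assumed round numbers for a mid M-dwarf: T = 3200 K, R = 0.3 R_sun.
# Solve t_eq(a) = t_earth for the orbital radius a:
t_dwarf, r_dwarf = 3200.0, 0.3 * R_SUN
a_hz = 0.5 * r_dwarf * (t_dwarf * 0.7 ** 0.25 / t_earth) ** 2

print(f"Earth-equivalent T_eq ~ {t_earth:.0f} K at ~ {a_hz / AU:.2f} AU from the dwarf")
```

The Earth-equivalent orbit comes out around a tenth of an AU -- close enough that flares and tidal effects matter, which is the whole argument.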
Surely there are atmospheric chemistries that would keep iron-cored, magnetic-field-equipped, water-bearing planets nice and toasty a good safe distance away from a red dwarf. Given the temperature, life will (probably) find a way... Of course, if it is really the case that temperature is mostly determined by net insolation and perhaps things like the presence of a vast water ocean covering 70% of the surface, with GHGs only contributing an easily saturable "blanket effect" good for a few tens of degrees absolute, well then, I could see that there could be a problem.

Also, it is worth remembering that water is a great radiation barrier. We obviously want to find "land life" because of our occupational bias, but as long as the planet has liquid water oceans, who really cares if the atmosphere is too radioactive for genetic stability? First of all, one can still imagine all sorts of ways that animals or plants could evolve to protect their genetic inheritance and re-stabilize a speciation process -- a half-dozen sexes, for example, with some sort of majority rule on the chromosome slots, using information redundancy to combat entropy as it were (or evolving more advanced stuff -- genetic "checksum" correction of some sort).

Red Dwarfs have much longer lifetimes than the sun, and given ten or twenty billion years, who knows what evolution will kick out? It could be that all of the really old, stable, wise life forms in the Universe evolved around Red Dwarfs precisely because mutation rates (and consequently rolls of the evolutionary dice) are high. We don't completely understand genetic optimization as employed in actual evolution, any more than we completely understand how the brain's neural networks avoid some of the no-free-lunch theorems and empirically demonstrated flaws in e.g. classification by even the most sophisticated networks we can yet build.
I'm not asserting that there is any "mystery" there, but there are damn sure a lot of scientific questions yet unanswered, and speculating about what we might find living in orbit around a Red Dwarf -- publicly and with much fanfare -- when we cannot reasonably go and find out is science fiction masquerading as science, not the real thing.

rgb about 3 months ago

Strange New World Discovered: The "Mega Earth"

Re:Science Writers: Stop Causing Us Intellectual P (147 comments)

The real problem (or interesting thing about this, if you don't like "problem") is scaling. 2.3^3 = 12.2. If this mystery planet is 2.3 times the size of Earth, one would expect it to have 12.2 (give or take a hair) times the mass of Earth, presuming that it has a similar core structure. It is almost half again more massive than that. This in turn suggests that the mantle is proportionally less of the total volume of the sphere, or rather, that it has a disproportionately larger core (nickel-iron core densities are 2-3 times the density of the mantle). At a guess, the core alone -- if it is nickel-iron, as seems at least moderately reasonable -- is at least half again larger than the size of the Earth. Alternatively, its core could contain an admixture of much heavier/denser stuff -- tungsten, lead, gold -- and not be so disproportionate.

rgb about 3 months ago

Scientists Find Method To Reliably Teleport Data

Re:This research should receive enormous funding. (202 comments)

Please excuse my absolute ignorance, but I was under the impression that a classical information channel was only required to transmit one of the entangled photons. If one of the entangled photons (or whatever it is that is entangled) was transported elsewhere (truck, fiber optics, what-not), the two entangled would still maintain the same state (spin etc.) and information could then be transmitted faster than light by changing the state of one and reading the state of the other.
Information cannot be transmitted faster than light as far as we know in standard physics today (barring extreme relativistic things like white or black holes, and I doubt even those unless/until experiment verifies any claim that they can). Quantum theory doesn't get around it. You cannot choose the direction to "collapse" or "change the state" of one of the two entangled spins, because the instant you measure it, it "collapses". You might now be able to predict the state of the other end of the channel, but the person there can't, because he doesn't know what you measured, so if he measures up or down when he tries (again, supposedly "collapsing the wavefunction") he won't know what you measured at your end or (since the two spins are no longer entangled as soon as a measurement is made at either end) what you do to it subsequently.

But the real problem (the "paradox" bit of EPR) is much worse than that. Suppose the two "entangled" electrons are separated by some distance D. Non-relativistic naive stupid quantum theory states that when one of the two electrons is measured, the wavefunction of the whole thing collapses. But suppose that D is nice and large -- in gedanken experiments we can make it a light year, why not? In the "rest frame of the Universe" (the frame in which the cosmic microwave background has on average no directional doppler shift) experimenters on both ends simultaneously perform a measurement of the spin state of the two electrons. This (simultaneity) is a perfectly valid concept in any given frame but is not a frame-invariant concept. Neither is temporal ordering a universally valid concept.
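The frame-dependence of temporal ordering is nothing exotic -- it is just the Lorentz transformation t' = \gamma (t - vx/c^2). A minimal sketch with c = 1 (the light-year separation and the boost speeds are arbitrary choices):

```python
import math

def boosted_time(t, x, v):
    """Time coordinate of event (t, x) in a frame boosted by velocity v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Two measurements, simultaneous in the "rest frame", one light-year apart:
t1 = boosted_time(0.0, 0.0, 0.5)        # measurement of spin 1, at the origin
t2 = boosted_time(0.0, 1.0, 0.5)        # measurement of spin 2, at x = 1 ly

t2_other = boosted_time(0.0, 1.0, -0.5)  # spin 2's measurement, opposite boost

# In the first boosted frame spin 2 is measured BEFORE spin 1 (t2 < t1);
# in the oppositely boosted frame it is measured AFTER (t2_other > t1).
```

So "which measurement happened first" is not a frame-invariant question, and neither is any "cause" hung on it.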
But given a simultaneous measurement of the two spins, which measurement causes the wavefunction to collapse and determines the global final state, given that the entropy of their measuring apparatus (which is responsible for the random phase shifts that supposedly break the entanglement; see the Nakajima-Zwanzig equation and the Generalized Master Equation) is supposedly completely separable and independent?

By making D nice and large, we have a further problem. I said that the measurements were simultaneous in "the rest frame" (and even gave you a prescription for determining what frame I mean), but that means that if we boost that coordinate frame along one direction or the other, we can make either measurement occur first! That is, suppose the spins are in a singlet spin state so that if one is measured up (along some axis) the other must be measured down. Suppose that in frame A, spin 1 interacts with its local measuring apparatus first and is filtered into spin down. This interaction with its local entropy pool -- exchanging information with it via strictly retarded e.g. electromagnetic interactions -- supposedly "transluminally", that is to say instantaneously in frame A, "causes" (whatever you want that word to mean) spin 2 in frame A to collapse into a non-entangled quantum state in which the probability of measuring its spin up in that frame, some time later than the time of measurement in frame A, is unity. In frame B, however, it is spin 2's measurement that is performed first, and as that electron interacts with its entropy pool you have a serious problem.
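For concreteness, the singlet correlations at issue can be written out explicitly. A stdlib-only sketch (real amplitudes suffice when both analyzer axes lie in a single plane; the angle 0.7 below is an arbitrary choice):

```python
import math

def p_up_up(a, b):
    """Probability both analyzers (at angles a, b) read 'up' for the singlet.

    With |psi> = (|01> - |10>)/sqrt(2) and |a+> = (cos(a/2), sin(a/2)),
    the amplitude <a+, b+|psi> works out to sin((b - a)/2) / sqrt(2).
    """
    amp = math.sin((b - a) / 2.0) / math.sqrt(2.0)
    return amp * amp

def p_up_down(a, b):
    """Probability analyzer a reads 'up' and analyzer b reads 'down'."""
    amp = math.cos((b - a) / 2.0) / math.sqrt(2.0)
    return amp * amp

# Same axis: perfect anti-correlation -- "up, up" never happens,
# no matter which measurement "happens first" in which frame.
same_axis = p_up_up(0.0, 0.0)

# Perpendicular axes: outcomes uncorrelated, P(up, up) = 1/4.
perp = p_up_up(0.0, math.pi / 2.0)

# Either spin taken alone is maximally mixed: its marginal is 1/2 for ANY
# remote analyzer setting, so no local measurement reveals anything about
# what was (or will be) done at the other end -- no signalling.
marginal = p_up_up(0.0, 0.7) + p_up_down(0.0, 0.7)
```

The correlations are perfect, but the one-sided statistics are pure coin flips: that is the sense in which no "cause" needs to travel between the ends.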
If you follow any of the quantum approaches to measurement -- most of them random phase approximation or master equation projections that assume that the filter forces a final state on the basis of its local entropy and unknown/unspecified state -- the apparatus cannot independently conclude that the spin of this electron is down; the measurement will definitely be up, because in frame A the measurement of spin 1 has already happened. In no possible sense can the measurement of spin 2 in frame B in the up state "cause" spin 1 to be in a state that -- independent of the state of its measurement apparatus -- will definitely be measured as spin down. Otherwise you have (in frame A) to accept the truth of the statement that a future measurement of the state of spin 2 is what determines the outcome of the present measurement of the state of spin 1. Oooo, bad.

The problem, as you can see, is that relativity theory puts some very stringent limits on what we can possibly mean by the word "cause". It pretty much completely excludes any possible way that the statement "measuring spin 1 causes the 1-2 entangled wavefunction to collapse" can have frame-invariant meaning, and meaning that isn't inertial-frame invariant in a relativistic universe isn't meaning at all; it is meaningless. We can only conclude that the correlated outcomes of the measurements were not determined by the local entropy state of the measurement apparatus at the time of the measurements.

Fortunately, we have one more tool to help us understand the outcome. Physics is symmetric in time. Indeed, our insistence on using retarded vs advanced or stationary (Dirac) Green's functions to describe causal interactions is entirely due to our psychological/perceptual experience of an entropic arrow of time, where entropy is, strictly speaking, the log of the missing/lost/neglected information in any macroscopic description of a microscopically reversible problem.
That's the reason the Generalized Master Equation approach is so enormously informative. It starts with the entire, microscopically reversible Universe, which is all in a definite quantum entangled state with nothing outside of it to cause it to "collapse". In this "God's Eye" description, there is just a Universal wavefunction or density operator for a few gazillion particles with completely determined phases, evolving completely reversibly in time, with zero entropy.

One then takes some subsystem -- say, an innocent pair of unsuspecting electrons -- and forms the e.g. 2x2 submatrix describing their mutually coupled state. Note well that both spins are coupled to every other particle in the Universe at all times -- this submatrix is "identified", not really created or derived, within the larger universal density matrix, and things like rows and columns can be permuted to (without loss of generality) bring it to the upper left hand corner, where it becomes the "system". The submatrix for everything else (not including the coupling to the spins) is similarly identified.

The Nakajima-Zwanzig construction treats this second submatrix statistically, because we cannot know or measure the general state of the Universe and have a hard enough time measuring/knowing the state of the 2x2 submatrix we've identified as an "entangled system". It projects the entirety of "everything else" into diagonal probabilities (by e.g. a random phase approximation, making the entropy of the rest of the Universe classical entropy) and then treats the interaction of these diagonal objects with the spins as being weak enough to be ignored -- usually, except of course when it is not. It is not when e.g. the spins emit or absorb photons from the rest of the Universe (virtual or otherwise) while interacting with a measuring apparatus or the apparatus that prepared the spins.
Because we cannot track the actual fully entangled phases of all the interactions within this enormous submatrix and between the submatrix and the system, the best we can manage is this semiclassical interaction that takes entropy from "the bath" (everything else) and bleeds it statistically into "the system". In this picture (which should again be geometrically relativistic) there was never any question as to the outcome of the "measurement" of the entangled spin state by the remotely separated apparatus at either end. Furthermore, while the NZ equation is not reversible, we can fully appreciate the fact that if we time-reverse the actual density matrix it approximates, the two electrons will leap out of the measuring apparatus, propagate backwards in time, and form the original supposedly quantum entangled state, because it never left it -- it was/is/will be entangled with every particle that makes up the measuring apparatus that would eventually "collapse" its wavefunction, over the entire span of time.

Note that in this description there is no such thing as wavefunction collapse, not really. That whole idea is neither microreversible nor frame invariant. It describes the classical process of measurement of a quantum object, where the measuring apparatus is not treated either relativistically correctly or as a fully coupled quantum system in a collectively definite state in its own right. It isn't surprising that it leads to paradoxes and hence silly statements that don't really describe what is going on.

This is a more detailed discussion of the very apropos comment above that similarly resolves Schrodinger's Cat -- the cat cannot be in a quantum superposition of alive and dead, because every particle in the cat and the quantum decaying nucleus that triggers the infernal device is never isolated from every other particle in the Universe.
The cat gives off thermal radiation while it is alive that exchanges information and entropy with the walls of the death chamber, which interact thermally with the outside. The instant the cat dies, there is a retarded propagation of the altered trajectories of all of its particles communicated to the outside Universe of coupled particles, which were in turn communicating/interacting with all of the particles that make up "the cat" and with the nucleus itself and with the detector and with the poisoning device before, during, and after all changes. The changes never occur in the "isolation" we approximate and imagine to simplify the problem.

Hope this helps.

rgb about 3 months ago

The Energy Saved By Ditching DVDs Could Power 200,000 Homes

Re:Nice try cloud guys (339 comments)

Although I don't want to get into the specific definition of "cloud" vs "cluster" vs "virtualized service server" etc. -- with the understanding that perhaps it is a definition in flux along with the underlying supporting software and virtualization layers, and hence will be hard to pin down and easy to argue fruitlessly about -- I agree with all of this. A major point of certain kinds of clustering software, from Condor on down, has been maintaining a high duty cycle on otherwise fallow resources that you've paid for already, that have to be plugged in all the time to be available for critical work anyway, that burn some (usually substantial) fraction of their load energy in idle mode waiting for work, and that depreciate and eventually are phased out by e.g. Moore's Law after 3-5 years in many cases even though they aren't broken and are perfectly capable of doing work.
Software like Condor lets even desktops be part of a local "cloud", running background jobs that don't really interfere with interactive response time much but that keep the duty cycle of the hardware very close to 100% instead of the 5-8% a mostly-idle desktop might manage (while still burning half or even 3/4 of the energy it burns when loaded). So it really isn't all about carbon (except insofar as energy, carbon-based or not, costs money). It's about money, and some of the money is linked to the use of carbon. High duty cycle utilization of resources is economically much more efficient. That's why businesses like to use it. It's often cheaper to scavenge free cycles from resources you already have than it is to build dedicated resources that might end up sitting idle much of the time.

The catch, however, is systems management. In many cases, the biggest single cost of setting up ANY sort of distributed computing environment is human. A single sysadmin capable of setting up serious clustering and managing virtualized resources could easily cost six figures per year, and that could easily exceed the cost of the resources themselves (including the energy cost) for a small to medium sized company. All too often, the systems management that is available is of questionable competence as well, which further complicates things. Virtualization in the cloud can at least help address some of these issues too, as one shares high end systems management people and high end software resources across a large body of users and hence gets much better economy of scale -- IF you can afford enough competence locally to get your tasks out there into the cloud in the first place and still satisfy corporate rules for due diligence, data integrity and security, and so on.

However, be aware that for all of the advantages of distributed computing, there are forces, market and otherwise, that push against it.
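As an aside, the duty-cycle economics sketched above can be made concrete. Every number below -- hardware price, lifetime, power draw, electricity price -- is an assumption for illustration, not a measurement:

```python
# Cost per delivered (busy) CPU-hour over an assumed 4-year hardware life,
# comparing a mostly-idle desktop with a Condor-style backfilled one.
# Assumed: $2000 node, 300 W loaded, 150 W idle, $0.10/kWh.

def cost_per_busy_hour(duty, hw_cost=2000.0, life_hours=4 * 8766,
                       w_load=300.0, w_idle=150.0, kwh_price=0.10):
    busy = duty * life_hours
    idle = (1.0 - duty) * life_hours
    energy_kwh = (busy * w_load + idle * w_idle) / 1000.0
    return (hw_cost + energy_kwh * kwh_price) / busy

idle_desktop = cost_per_busy_hour(0.06)  # ~6% duty, interactive use only
scavenged = cost_per_busy_hour(0.95)     # backfilled with batch jobs

print(f"${idle_desktop:.2f} vs ${scavenged:.2f} per busy CPU-hour")
```

Even with idle power at half of load, the mostly-idle desktop delivers its cycles at more than ten times the cost per busy hour, which is the whole economic case for cycle scavenging.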
I buy a license for some piece of mission-critical (say accounting) software, and that license usually restricts it to run on a single machine. If I put it on a virtual machine and run it on many pieces of hardware (but on only one machine at a time), I'm probably violating the letter of the license, and the company that sold the software has at least some incentive to hold me to the letter so they can sell me a license for every piece of hardware I might end up running a virtualized instance upon. Correctly licensing stuff one plans to run "in the cloud", then, is a bit of a nightmare -- if you care about that sort of thing. If one is a business, this can be a real (due-diligence sort of) issue.

Which brings us full circle back to the top article. There are ever so many things that would be vastly more efficient "in the cloud", or just "run from the internet and distributed servers" as a more general version of the same thing. Netflix, sure, but how about paper newspapers? Every day they require literally tons of paper per locality, cubic meters of ink, enough electricity to power a small manufactory, transportation fuel for the workers who cut the trees, for the trees as they go to the paper mill, for the paper as it is carried to the newspapers, and finally for delivering the newspaper to the houses that receive it -- and, as the final insult, the fuel needed to pick up the mostly unread newsprint and cart it off to "recycle" (which may save energy compared to cutting trees, but costs energy compared to not having newspapers at all). Compare that to the marginal cost of storing an image of the same informational content on a server with sufficient capacity and distributing that replicated image to a household. The newspaper costs on the order of a dollar a day to deliver.
The image of the newspaper costs such a thin fraction of a single cent to deliver that the only reason to charge for an online paper or news service at all is to pay the actual reporters and editors who assemble the image. Compare the cost of delivering paper mail to email. Compare the cost of driving out to "shop" vs shopping online. The world hasn't even begun to realize the full economic benefits of the ongoing informational/communication revolution.

And sure, some of the benefit can be measured in terms of "saving fuel/energy resources" (including ones based on carbon -- but even if the electricity I use, or that is used in steps that are streamlined or eliminated, comes from a nuclear power plant, it costs money just the same). Personally, I don't worry as much about "carbon" utilization reduction as I do about poverty and improved standards of living worldwide (which I think is by far the more important priority), but network-based efficiencies accomplish both nicely.

rgb

about 3 months ago

top

I Want a Kindle Killer

I'm not sure I understand... (321 comments)

...since an e-book reader is software, not hardware. I read my (many) Kindle books on my old-gen Kindle, on my wife's iPad, on my own Galaxy Tab 2, on my wife's rooted Nook, on my Android cell phone (but not usually -- too-small print), and on my computer(s). In other words, it is pretty easy to get a free Kindle book reader for many (maybe even most) platforms and hook it into your library. I'm not sure what "advantages" a Kindle per se has over any of these platforms, either for reading books or for playing Android games or for doing work of various sorts. My Galaxy is pretty awesome for the purposes I put it to -- reading books (mostly Kindle books, sometimes Google books, not infrequently free epub/mobi books), playing Sudoku, playing any of the other dozen or so well-done games I've invested in so far (some for free, some cheap, some "expensive" at $7 or $9 each), rarely browsing the web, doing email, etc.
Rarely, because I prefer to attach a Bluetooth keyboard if I'm going to do keyboard-based work, and I have other computers that are better suited for most of that.

So I don't get the "Kindle Killer" comment. You mean something better than the Kindle as an Android platform? Lots of choices -- Samsung Galaxy is arguably better in nearly any dimension, for example, and many people have pointed out that the rooted Nook is a nice cheap choice (and would be a "good" choice if Barnes and Noble got their head out of their rear and didn't force one to root it to be able to install arbitrary Android apps from the Android store). And then there is the iPad -- which is a lovely little piece of hardware whatever you think of drinking the Apple-ade. There is the Surface -- personally I won't get one both because I still have a bit of Evil Empire problem with Microsoft and because it is expensive as all hell compared to anything but a full-feature iPad. I've looked at a bunch of the other Android Tabs in the stores, and none of them really suck, although some are arguably better than others. Many are cheaper than the Kindle and don't have Kindle's anti-Google thing going (although the Kindle is reportedly better than the Nook in that regard, but perhaps not by much).

If you mean kindle SOFTWARE killer -- then I truly don't understand your comment. A better version of the existing Kindle book reader? A third party reader (unlikely, given proprietary stuff)?

A hammer?

rgb

top

The Big Bang's Last Great Prediction

Re:Actually, a really nice article... (80 comments)

Unless I've missed something crucial here. But perhaps we'll have a breakthrough in accelerator technology that will let us reach these levels at some point. If we hit the resonance, the scattering rate will be of the order 1e-31, 13 orders of magnitude higher than what I used in my back-of-the-envelope calculation. But we aren't likely to hit those energies soon, I think.

Oops (blush). I haven't done relativistic kinematics for a very long while either, but I forgot about momentum conservation altogether. And here I am teaching undergrads about inelastic collisions...

Well darn. It looks like it could borderline work, in the sense of producing events every month or even more often if one could get TeV electrons at beam currents of order 1 ampere, but you're right, getting to the PeV resonance will be, err, difficult. OK, so probably not worth rebuilding SLAC for.

As for muon-catalyzed fusion, Larry's last idea on the subject was politically incorrect but intriguing. He suggested using it as a second-stage energy boost, gleaning muons from fission reactors. But even then (20+ years ago) fission reactors were politically incorrect and there wasn't a lot of enthusiasm for the idea. I never worked out the math (I assume he did, somewhere) to see if there were enough muons per fission and enough fusions per muon to get a significant gain in net nuclear fuel yield, though.

top

The Big Bang's Last Great Prediction

Re:Actually, a really nice article... (80 comments)

Interesting article. Things really do get complicated at those energy scales...:-)

They're using Cerenkov detectors, though, for very very high energy events. I wonder how sensitive they are to muons with much lower energies. The scales on the figures in the article, for example, don't actually go down to 100 GeV -- the left hand edge (log scale) appears to be 1 TeV. But the cross sections are indeed pretty small and it is difficult to get rates to rise above the background cosmic ray muon flux (which I actually measured, once upon a time back in the 70's before I became a theorist:-).

I suppose the only unresolved question would be whether or not there is a narrow but strong electron-electron neutrino resonance around the rest mass of the W-. The collision volume (even summed over the length of the beam column in e.g. a SLAC-like pipe) is quite small, but SLAC is apparently capable of generating 1/2 an ampere of beam current. That's basically 10^19 electrons/second, which knocks five orders of magnitude off your estimate of 1 event per 300000 years, to one per 3 years. That seems low enough that IF there were any sort of actual resonance, it might knock another order of magnitude off and yield at least several events per year, maybe more. Nobel prizes have been won with little more...:-)
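The beam-current arithmetic above is easy to replay. The sketch below simply re-derives the numbers quoted in the comment (0.5 A of beam current, one event per 300,000 years at unit flux, a claimed five-order-of-magnitude speed-up); those inputs are the comment's own estimates, not measured values:

```python
# Replay of the back-of-the-envelope rate scaling in the text:
# 0.5 A of beam current expressed as electrons per second, and the
# corresponding speed-up of the quoted one-event-per-300,000-years rate.

e_charge = 1.602e-19          # electron charge, coulombs
beam_current = 0.5            # amperes (the SLAC figure quoted in the text)

electrons_per_sec = beam_current / e_charge
print(f"electrons/second: {electrons_per_sec:.2e}")   # ~3e18, order 1e18-1e19

base_rate_years = 300_000.0   # one event per this many years (quoted estimate)
speedup = 1e5                 # the "five orders of magnitude" claimed above
print(f"one event per {base_rate_years / speedup:.0f} years")  # -> 3 years
```

Strictly, 0.5 A works out to about 3×10^18 electrons/second, so "basically 10^19" is a generous rounding, but the order-of-magnitude conclusion is unchanged.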

Is the seat-of-the-pants estimate sufficient to propose doubling SLAC's peak energy and current and designing a custom bending magnet at the end of a long, otherwise empty beam pipe to resolve resonant muons from background? Maybe not, but it might be part of a proposal that included other experiments (including a revival of its experiments to search for the Higgs, for example) that might benefit from a substantially beefed-up beam.

Of course the REALLY cool way to do this would be to do it either on the moon or at one of the Lagrange points -- someplace where 100 km beams don't require either Earth-expensive real estate or tunnels or pipes, and where solar energy could provide a gigawatt of "free" power once you built the facility. Really cool and awesomely expensive, but imagine using the polar moon to build a ring and the equatorial moon to build a "linear" (great circle curvature) accelerator. Even a theorist can appreciate that...

Imagine doing an experiment to scatter electrons off of thermal big bang photons, for example, Doppler-shifted up to GeV scales. That is similarly difficult, actually.

I once upon a time fantasized about creating some sort of "wiggler" in the electroweak interaction that could resonantly convert electrons into muons, along the lines of the way FELs create a "virtual photon" in the electron rest frame. If one could ever make this sort of thing efficient enough, one could revisit the issue of muon-catalyzed fusion and maybe do an end run around thermal confinement problems. My Ph.D. advisor (Larry Biedenharn) spent a decade or so looking hard at muon-catalyzed fusion, so I learned a lot about it then even though my research was in completely different stuff. The primary sticking point was the huge cost per muon of creating muons via e.g. nuclear cross sections and pion decay. If one could ever short-circuit that, the issue would be worth revisiting even with the other problems, just for the pleasure of the physics...

rgb

top

The Big Bang's Last Great Prediction

Re:Actually, a really nice article... (80 comments)

Thanks, you are probably right -- as I said, I wasn't doing the math, but was just thinking that an accelerated beam IS a rapidly moving detector. I was also assuming that it was the lack of collision-frame energy in the huge neutrino detectors that was the limiting factor in detecting thermal neutrinos -- creating a W boson requires order of 100 GeV, and of course this just isn't available (outside of Heisenberg uncertainty and extremely suppressed virtual processes) in collisions between mutually thermal atoms and neutrinos. But creating a 100 GeV electron beam has actually been done (LEP), specifically to enable the creation of the heavy vector bosons in particle/antiparticle collisions (which peaked out around 209 GeV of collision energy). I would have expected the thermal neutrino cross-section to take a dramatic uptick once the frame energy was sufficient to actually enable the direct process -- even in the huge detectors in use, a major problem is that the incoming neutrinos don't have ~100 GeV in the collision frame, right?

I think LEP was shut down and its tunnel re-purposed for the LHC, and I'm guessing the LHC can't be used to accelerate a lepton beam without basically rebuilding it, so there may be no machine currently in existence that could do this anyway. (I should have thought more carefully about the collision frame issue and the W rest mass when I suggested that SLAC or the FEL could do this -- they are still well below the threshold for producing W's from thermal neutrinos, though SLAC is close at 50 GeV.) But suppose the cross-section issue IS lack of frame energy as opposed to probability of encounter, as seems likely -- there are going to be plenty of close encounters between electrons and neutrinos in a long run even with a comparatively low neutrino density, but one has to have at least enough energy to create a W for at least long enough to make it a virtual channel for the final muon and antineutrino. I'm guessing that we don't have measurements of the cross section at these frame energies (unless there is data from LEP somewhere), but the possibility of a resonance or cross-section spike once the W threshold is passed is hardly unreasonable.
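The frame-energy argument can be put in numbers with a short kinematics sketch. For an electron beam of energy E hitting a relic neutrino that is approximately at rest, the invariant mass squared is s = m_e² + m_ν² + 2·E·m_ν. The neutrino mass used below (0.1 eV) is a hypothetical placeholder, not a measured value; the sketch just shows how far the muon threshold and the W pole sit above any existing beam under that assumption:

```python
import math

# Invariant mass for an electron beam hitting a (nearly) at-rest relic
# neutrino: s = m_e^2 + m_nu^2 + 2 * E_beam * m_nu   (units: GeV, c = 1).
# m_nu = 0.1 eV is a hypothetical placeholder mass, not a measurement.

m_e  = 0.511e-3      # electron mass, GeV
m_mu = 0.1057        # muon mass, GeV
m_W  = 80.4          # W boson mass, GeV
m_nu = 0.1e-9        # assumed neutrino mass, GeV (0.1 eV)

def sqrt_s(E_beam):
    """Collision-frame energy (GeV) for beam energy E_beam (GeV)."""
    return math.sqrt(m_e**2 + m_nu**2 + 2.0 * E_beam * m_nu)

def beam_energy_for(target):
    """Beam energy (GeV) needed so that sqrt(s) reaches `target` (GeV)."""
    return (target**2 - m_e**2 - m_nu**2) / (2.0 * m_nu)

print(f"sqrt(s) at 50 GeV (SLAC-scale): {sqrt_s(50.0) * 1e3:.3f} MeV")
print(f"beam energy for muon threshold: {beam_energy_for(m_mu):.2e} GeV")
print(f"beam energy for the W pole:     {beam_energy_for(m_W):.2e} GeV")
```

With these assumptions a 50 GeV beam barely clears the electron's own rest mass in the collision frame, the bare muon-production threshold sits at tens of PeV, and the W pole is many orders of magnitude beyond that -- consistent with the "getting to the PeV resonance will be difficult" verdict above.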

top

This Is Your Brain While Videogaming Stoned

... even if N = 4 in a non-double-blind placebo-controlled experiment isn't exactly science. But a lovely anecdote, and one that matches my own personal experience (from long ago) in many ways. I hope people are actually reading TFA -- but wait, this is /. so that's a silly thought.

top

Trillions of Plastic Pieces May Be Trapped In Arctic Ice

Re:It didn't take long to leave our mark in the se (136 comments)

I gotta say, are we talking about the same Parthenon? The one built at the top of a hill overlooking Athens as pretty nearly the sole structure on the hilltop?

It doesn't precisely show the elevations, but:

is one view, or perhaps this will do better:

http://www.greatbuildings.com/...

As you can see, it is pretty much on top of a mesa. So I'm not sure where your "slight dip in the terrain" could possibly be.

I point this out not because your argument is implausible in general, but because your specific example is one of my favorite places on Earth, and although I've only been fortunate enough to visit it in person twice in my lifetime (so far), I remember the walk up from Athens proper quite well, including stopping in some of the many small taverns along the trek for octopus and retsina.

rgb

top

The Big Bang's Last Great Prediction

Actually, a really nice article... (80 comments)

That was really lovely, and thank you for posting it.

You assert that one problem with detection is the difficulty of accelerating entire neutrino detectors to GeV energy scales. I'm not sure that I agree. Muons, as we know, decay into an electron and two kinds of neutrino/antineutrino. Electrons moving at GeV scales have more than enough energy to be transformed into muons in the inverse reaction -- if they happen to hit an electron antineutrino -- or, more properly, they have a chance to be transformed into a W- boson, which can then decay into several things: lepton/neutrino pairs or quark pairs, one of which produces muons.

Muons are easy to detect. Electrons with "suddenly" shifted energy are also easy to detect (another possible outcome). Finally, quark-antiquark "jets" are easy to detect.

At the densities of thermal neutrinos asserted, it seems reasonably probable (without, admittedly, doing the computation) that GeV-scale electrons will encounter free neutrinos, undergo the inverse reaction, and produce muons along a freely moving beam track -- and indeed that places like SLAC and the Duke FEL would be producing a small but detectable flux of muons all along the straight legs of their beams, muons that would then either exit sideways (where they could be detected lots of ways) or continue along the collision frame of reference and be moderately separable at the next bending magnet. Yes, there would likely be some auxiliary production near the actual beam from electron collisions with beam pipe metal outside of the beam envelope, but one would expect to be able to put a vacuum pipe along the frame of reference of the collision, a kilometer long or thereabouts, PAST a bending magnet (at the right angle) at the end of a long straight leg and run it into a detector, which would then detect all/mostly muons produced by neutrino scattering. Or so it seems.
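A crude scale for the rate such a setup would see follows from the thin-target formula n·σ·L. The relic-neutrino density below is the standard cosmological figure (~56 per cm³ per neutrino species); the cross-section, by contrast, is a purely illustrative placeholder, since that unknown is the whole question under discussion:

```python
# Interaction probability for one electron crossing a straight beam leg
# of length L through the relic neutrino background, and the event rate
# at a given beam current.  sigma is a PURELY ILLUSTRATIVE placeholder,
# not a measured or predicted value.

n_nu  = 5.6e7       # relic neutrinos per m^3 per species (~56 / cm^3)
sigma = 1e-48       # cross-section in m^2 -- hypothetical placeholder
L     = 1000.0      # straight beam leg, meters

electrons_per_sec = 0.5 / 1.602e-19   # 0.5 A of beam current

p_per_electron = n_nu * sigma * L     # thin-target probability, n * sigma * L
rate_per_sec   = p_per_electron * electrons_per_sec
print(f"interaction probability per electron: {p_per_electron:.2e}")
print(f"events per year: {rate_per_sec * 3.156e7:.2e}")
```

The structure of the estimate is the point here: everything is linear in the cross-section, so the proposal stands or falls entirely on whether a resonance boosts sigma by the many orders of magnitude discussed above.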

Is this wrong?

rgb
