Comments


Researchers Design DNA With New Shapes and Structures

kebes Re:This should be a given.. (47 comments)

The base-pair sequence of DNA determines its biological function. As you say, this sequence determines what kinds of proteins get made, including their exact shape (and more broadly how they behave).

But TFA is talking about the conformation (shape) of the DNA strand itself, not the protein structures that the DNA strand is used to make.

In living organisms, the long DNA molecule always forms a double-helix, irrespective of the base-pair sequence within it. DNA double helices do actually twist and wrap into larger-scale structures: specifically by wrapping around histones, and then twisting into larger helices that eventually form chromosomes. There are hints that the DNA sequence itself is important in controlling how this twisting/packing happens (with ongoing research into how the (inappropriately named) "junk DNA" plays a crucial role). However, despite this interplay between sequence and super-structure, at the lowest level DNA strands are just forming double-helices: i.e. two complementary DNA strands pair up to make a really-long double-helix.

What TFA is talking about is a field called "DNA nanotechnology", where researchers synthesize non-natural DNA sequences. If cleverly designed, these sequences will, when they do their usual base-pairing, form a structure more complex than the traditional really-long double-helix. The structures that are designed do not occur naturally. People have created some really complex structures made entirely of DNA. Again, these are structures made out of DNA (not structures that DNA generates). You can see some examples by searching for "DNA origami". E.g. one famous demonstration was a nano-sized smiley face; others include 3D geometric shapes, nano-boxes and bottles, gear-like constructs, and all kinds of other things.

The 'trick' is to violate the assumptions of DNA base-pairing that occur in nature. In living cells, DNA sequences are created as two long complementary strands, which pair up with each other. The idea in DNA nanotechnology is to create an assortment of strands. None of the strands is perfectly complementary to another, but 'sub-regions' of some strands are complementary to 'sub-regions' on other strands. As they start pairing up with each other, this creates cross-connections between all the various strands. The end result (if your design is done correctly) is that the strands spontaneously form a very well-defined 3D structure, with nanoscale precision. The advantage of this "self-assembly" is that you get billions of copies of the intended structure forming spontaneously and rapidly. Very cool stuff.
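To make the 'sub-region' idea concrete, here is a toy Python sketch (the sequences are made up purely for illustration and have nothing to do with the actual designs in TFA):

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    # Watson-Crick complement, read in the antiparallel (reverse) direction
    return "".join(COMPLEMENT[base] for base in reversed(seq))

strand_1 = "ATGCCGTAAGCTTACG"
strand_2 = "CGTAAGCTGGAATTCC"

# The last 8 bases of strand_1 can pair with the first 8 bases of strand_2,
# creating a cross-connection between the two strands...
print(reverse_complement(strand_1[-8:]) == strand_2[:8])   # True

# ...even though the strands as a whole are not complementary.
print(reverse_complement(strand_1) == strand_2)            # False

A real design has to check every sub-region of every strand against every other (and avoid unintended pairings), which is presumably the kind of combinatorial bookkeeping that the software described in the new paper automates.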

This kind of thing has been ongoing since 2006 at least. TFA erroneously implies that this most recent publication invented the field. Actually, this most recent publication is some nice work about how the design process can be made more robust (and software-automated). So, it's a fine paper, but certainly not the first demonstration of artificial 3D DNA nano-objects.

about two weeks ago

Ask Slashdot: How Do You Sort?

kebes Non-deterministic sort (195 comments)

Human sorting tends to be rather ad-hoc, and this isn't necessarily a bad thing. Yes, if someone is sorting a large number of objects/papers according to a simple criterion, then they are likely to be implementing a version of some formal sorting algorithm... But one of the interesting things about a human sorting things is that they can, and do, leverage some of their intellect to improve the sort. Examples:
1. Change sorting algorithm partway through, or use different algorithms on different subsets of the task. E.g. if you are sorting documents in a random order and suddenly notice a run that is already roughly in order, you'll intuitively switch to a different algorithm for that bunch (see the sketch just after this list). In fact, humans very often sub-divide the problem at large into stacks, and sub-sort each stack using a different algorithm, before finally combining the result. This is also relevant since sometimes you actually need to change your sorting target halfway through a sort (when you discover a new category of document/item; or when you realize that a different sorting order will ultimately be more useful for the high-level purpose you're trying to achieve; ...).
2. Pattern matching. Humans are good at discerning patterns. So we may notice that the documents are not really random, but have some inherent order (e.g. the stack is somewhat temporally ordered, but items for each given day are reversed or semi-random). We can exploit this to minimize the sorting effort.
3. Memory. Even though humans can't juggle too many different items in their head at once, we're smart enough that when we encounter an item, we can recall having seen similar items. Our visual memory also allows us to home in on the right part of a semi-sorted stack in order to group like items.
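As a rough illustration of point 1, here's a little Python sketch (my own toy code, not anything from the discussion) of a sort that keeps runs which are already in order and merges them, instead of treating every element identically; this is roughly what natural merge sort (and, with more refinements, Timsort) does:

from heapq import merge

def natural_merge_sort(items):
    items = list(items)
    if not items:
        return items

    # Split the input into maximal non-decreasing runs: a bunch that is
    # already roughly in order gets kept together instead of being re-sorted.
    runs, run = [], [items[0]]
    for current in items[1:]:
        if current >= run[-1]:
            run.append(current)
        else:
            runs.append(run)
            run = [current]
    runs.append(run)

    # Repeatedly merge pairs of runs until a single sorted run remains.
    while len(runs) > 1:
        runs = [list(merge(runs[i], runs[i + 1])) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0]

print(natural_merge_sort([3, 4, 7, 9, 2, 5, 6, 1, 8]))  # [1, 2, 3, ..., 9]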

The end result is a sort that is rather non-deterministic, but ultimately successful. It isn't necessarily optimal for the given problem space, but conversely the human intellect lets the sorter generate lots of shortcuts along the way. (By which I mean, a machine limited to paper-pushing at human speed, but implementing a single formal algorithm, would take longer to finish the sort... Of course in reality mechanized/computerized sorting is faster because each machine operation is faster than the human equivalent.)

about 10 months ago

Landmark Calculation Clears the Way To Answering How Matter Is Formed

kebes Re:Just another step closer... (205 comments)

You make good points. However, I think you're somewhat mischaracterizing the modern theories that include parallel universes.

So long as we use the real physicists definitions and not something out of Stargate SG1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

(emphasis added) Your implication is that physicists have invented parallel universes, adding them to their theories. In actuality, parallel realities are predictions of certain modern theories. They are not axioms, they are results. Max Tegmark explains this nicely in a commentary (here or here). Briefly: if unitary quantum mechanics is right (and all available data suggests that it is), then this implies that the other branches of the wavefunction are just as real as the one we experience. Hence, quantum mechanics predicts that these other branches exist. Now, you can frame a philosophical question about whether entities in a theory 'exist' or whether they are just abstractions. But it's worth noting that there are plenty of theoretical entities that we now accept as being real (atoms, quarks, spacetime, etc.). Moreover, there are many times in physics where, once we accept a theory as being right, we accept its predictions about things we can't directly observe. Two examples would be: to the extent that we accept general relativity as correct, we make predictions about the insides of black holes, even though we can't ever observe those areas. To the extent that we accept astrophysics and big-bang models, we make predictions about parts of the universe we cannot ever observe (e.g. beyond the cosmic horizon).

An untestable idea isn't part of science.

Indeed. But while we can't directly observe other branches of the wavefunction, we can, through experiments, theory, and modeling, indirectly learn much about them. We can have a lively philosophical debate about to what extent we are justified in using predictions of theories to say indirect things are 'real' vs. 'abstract only'... but my point is that parallel realities are not alone here. Every measurement we make is an indirect inference based on limited data, extrapolated using a model we have some measure of confidence in.

Occam's Razor ...

Occam's Razor is frequently invoked but is not always as useful as people make it out to be. If you have a theory X and a theory X+Y that both describe the data equally well, then X is better via Occam's Razor. But if you're comparing theories X+Y and X+Z, it's not clear which is "simpler". You're begging the question if you say "Clearly X+Y is simpler than X+Z! Just look at how crazy Z is!" More specifically: unitary quantum mechanics is arguably simpler than quantum mechanics + collapse. The latter involves adding an ad-hoc, unmeasured, non-linear process that has never actually been observed. The former is simpler at least in description (it's just QM without the extra axiom), but as a consequence predicts many parallel branches (it's actually not an infinite number of branches: for a finite volume like our observable universe, the number of possible quantum states is large but finite). Whether an ad-hoc axiom or a parallel-branch prediction is 'simpler' is debatable.

Just about any other idea looks preferrable to an idea that postulates an infinite number of unverifiable consequents.

Again, the parallel branches are not a postulate, but a prediction. They are a prediction that bothers many people. Yet attempts to find inconsistencies in unitary quantum mechanics have so far failed. Attempts to observe the wavefunction collapse process have also failed (there appears to be no limit to the size of the quantum superposition that can be generated). So the scientific conclusion is to accept the predictions of quantum mechanics (including parallel branches), unless we get some data that contradicts it. Or, at the very least, not to dismiss these predictions entirely unless you have empirical evidence against either them or unitary quantum mechanics itself.

more than 2 years ago

Wozniak Calls For Open Apple

kebes Re:Can't have it both ways (330 comments)

I disagree. Yes, there are tensions between openness/hackability/configurability/variability and stability/manageability/simplicity. However, the existence of certain tradeoffs doesn't mean that Apple couldn't make a more open product in some ways without hampering their much-vaunted quality.

One way to think about this question is to analyze whether a given open/non-open decision is motivated by quality or by money. A great many of the design decisions being made are not in pursuit of a perfect product, but are part of a business strategy (lock-in, planned obsolescence, upselling of other products, DRM, etc.). I'm not just talking about Apple; this is true very generally. Examples:
- Having a single set of hardware to support does indeed make software less bloated and more reliable. That's fair. Preventing users from installing new hardware (at their own risk) would not be fair.
- Similarly, having a restricted set of software that will be officially supported is fine. Preventing any 'unauthorized' software from running on a device a user has purchased is not okay. The solution is to simply provide a checkbox that says "Allow 3rd party sources (I understand this comes with risks)" which is what Android does but iOS does not.
- Removing seldom-used and complex configuration options from a product is a good way to make it simpler and more user-friendly. But you can easily promote openness without making the product worse by leaving configuration options available but less obvious (e.g. accessed via commandline flags or a text config file).
- Building a product in a non-user-servicable way (no screws, only adhesives, etc.) might be necessary if you're trying to make a product extremely thin and slick.
- Conversely, using non-standard screws, or using adhesives/etc. where screws would have been just as good, is merely a way to extract money from customers (forcing them to pay for servicing or buy new devices rather than fix old hardware).
- Using bizarre, non-standard, and obfuscated file formats or directory/data-structures can in some cases be necessary in order to achieve a goal (e.g. performance). However in most cases it's actually used to lock-in the user (prevent user from directly accessing data, prevent third-party tools from working). E.g. the way that iPods appear to store the music files and metadata is extremely complex, at least last time I checked (all files are renamed, so you can't simply copy files to-and-from the device). The correct solution is to use open formats. In cases where you absolutely can't use an established standard, the right thing to do is to release all your internal docs so that others can easily build upon it or extend it.

To summarize: yes, there are cases where making a product more 'open' will decrease its quality in other ways. But there are many examples where you can leave the option for openness/interoperability without affecting the as-sold quality of the product. (Worries about 'users breaking their devices and thus harming our image' do not persuade; the user owns the device, and ultimately we're talking about experienced users and third-party developers.) So we should at least demand that companies make their products open in all those 'low-hanging-fruit' cases. We can then argue in more detail about fringe cases where there is really an openness/quality tradeoff.

more than 2 years ago

Gamma-Ray Bending Opens New Door For Optics

kebes Re:n = 1.000000001 (65 comments)

I'm somewhat more hopeful than you, based on advances in x-ray optics.

For typical x-ray photons (e.g. 10 keV), the refractive index is 0.99999 (delta = 1E-5). Even though this is very close to 1, we've figured out how to make practical lenses. For instance Compound Refractive Lenses use a sequence of refracting interfaces to accumulate the small refractive effect. Capillary optics can be used to confine x-ray beams. A Fresnel lens design can be used to decrease the thickness of the lens, giving you more refractive power per unit length of the total optic. In fact, you can use a Fresnel zone plate design, which focuses the beam due to diffraction (another variant is a Laue lens which focuses due to Bragg diffraction, e.g. multilayer Laue lenses are now being used for ultrahigh focusing of x-rays). Clever people have even designed lenses that simultaneously exploit refractive and diffractive focusing (kinoform lenses).

All this to say that with some ingenuity, the rather small refractive index differences available for x-rays have been turned into decent amounts of focusing in x-ray optics. We have x-rays optics now with focal lengths on the order of meters. It's not trivial to do, but it can be done. It sounds like this present work is suggesting that for gamma-rays the refractive index differences will be on the order of 1E-7, which is only two orders-of-magnitude worse than for x-rays. So, with some additional effort and ingenuity, I could see the development of workable gamma-ray optics. I'm not saying it will be easy (we're still talking about tens or hundreds of meters for the overall camera)... but for certain demanding applications it might be worth doing.
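For a rough sense of the numbers, here's a back-of-the-envelope Python sketch using the standard estimate for a compound refractive lens, f ~ R/(2*N*delta); the particular geometry (R, N) is my own assumption, just to illustrate the scaling:

def crl_focal_length(radius, n_lenses, delta):
    # f ~ R / (2 * N * delta) for a stack of N parabolic refracting surfaces,
    # where delta is the refractive index decrement (n = 1 - delta)
    return radius / (2 * n_lenses * delta)

# Hard x-rays: R = 200 um, 10 lenses, delta ~ 1e-5  ->  f ~ 1 m
print(crl_focal_length(200e-6, 10, 1e-5))

# Same geometry with a gamma-ray-like delta ~ 1e-7  ->  f ~ 100 m,
# hence the need for many more surfaces, smaller R, or a very long instrument
print(crl_focal_length(200e-6, 10, 1e-7))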

more than 2 years ago

IBM Creates MRI With 100M Times the Resolution

kebes High resolution but small volume (161 comments)

The actual scientific paper is:
C. L. Degen, M. Poggio, H. J. Mamin, C. T. Rettner, and D. Rugar, "Nanoscale magnetic resonance imaging," PNAS 2009, doi: 10.1073/pnas.0812068106.

The abstract:

We have combined ultrasensitive magnetic resonance force microscopy (MRFM) with 3D image reconstruction to achieve magnetic resonance imaging (MRI) with resolution <10 nm. The image reconstruction converts measured magnetic force data into a 3D map of nuclear spin density, taking advantage of the unique characteristics of the 'resonant slice' that is projected outward from a nanoscale magnetic tip. The basic principles are demonstrated by imaging the 1H spin density within individual tobacco mosaic virus particles sitting on a nanometer-thick layer of adsorbed hydrocarbons. This result, which represents a 100 million-fold improvement in volume resolution over conventional MRI, demonstrates the potential of MRFM as a tool for 3D, elementally selective imaging on the nanometer scale.

I think it's important to emphasize that this is a nanoscale magnetic imaging technique. The summary implies that they created a conventional MRI with nanoscale resolution, as if they can now image a person's brain and pick out individual cells and molecules. That is not the case! And it is likely never to be possible (given the frequencies of radiation that MRI uses and the diffraction limit that applies to far-field imaging).

That having been said, this is still a very cool and noteworthy piece of science. Scientists use a variety of nanoscale imaging tools (atomic force microscopes, electron microscopes, etc.), but having the ability to do nanoscale magnetic imaging is amazing. In the article they do a 3D reconstruction of a tobacco mosaic virus. One of the great things about MRI is that it has some amount of chemical selectivity: there are different magnetic imaging modes that can differentiate based on chemical makeup. This nanoscale analog can use similar tricks: instead of just getting images of surface topography or electron density, it could actually determine the chemical makeup within nanostructures. I expect this will become a very powerful technique for nano-imaging over the next decade.

more than 5 years ago

Saving Energy Via Webcam-Based Meter Reading?

kebes Orientation analysis in an image (215 comments)

The image analysis question is interesting. You are trying to read dial positions, so conventional OCR is probably useless (unless there is a package to do exactly that?).

What you can do is use image processing commands (in your favorite programming language; a shell script, Python, etc.) to crop the image to generate a small image for each dial. Then convert to grayscale (and maybe increase the contrast to highlight the dial). To then calculate the preferred orientation in the image, you calculate gradients along different directions. There will be a much higher value for the gradient along directions perpendicular to the preferred axis. This procedure is described very briefly in this paper:
Harrison, C.; Cheng, Z.; Sethuraman, S.; Huse, D. A.; Chaikin, P. M.; Vega, D. A.; Sebastian, J. M.; Register, R. A.; Adamson, D. H. "Dynamics of pattern coarsening in a two-dimensional smectic system" Physical Review E 2002, 66, (1), 011706. DOI: 10.1103/PhysRevE.66.011706

This is easiest to do if you use a graphics package that has directional gradients built-in (but coding it yourself probably wouldn't be too hard). Basically you create copies of the image and on one you do a differentiation in the x-direction, and for the other one a differentiation in the y-direction. Let's call these images DIFX and DIFY. Then you compose two new images:
NUMERATOR = 2*DIFX*DIFY
DENOMINATOR = DIFX^2-DIFY^2

Then you calculate a final image:
ANGLES = atan2( NUMERATOR, DENOMINATOR )

(All the above calculations are done pixel-by-pixel.) The final image will be an angle map for the image, with values between -pi and pi (strictly speaking this is the doubled orientation angle, so halve it if you want the physical direction between -pi/2 and pi/2). It should then be easy to use the average (or a histogram peak) over that image to pull out the preferred direction. You may also improve results by tweaking the initial thresholding, by adding an initial "Sharpen Edges" step, or by blurring the NUMERATOR and DENOMINATOR images slightly before computing the ANGLES image.
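For what it's worth, here is a minimal Python/numpy version of the above procedure (the choice of Sobel gradients, the smoothing width, and the file handling are my own assumptions, so treat it as a starting point rather than a reference implementation):

import numpy as np
from scipy import ndimage
from PIL import Image

def dial_orientation(path):
    # Load one cropped dial image and convert to grayscale
    img = np.asarray(Image.open(path).convert("L"), dtype=float)

    # Directional gradients (DIFX and DIFY above)
    difx = ndimage.sobel(img, axis=1)
    dify = ndimage.sobel(img, axis=0)

    # NUMERATOR and DENOMINATOR, with a little smoothing before combining
    numerator = ndimage.gaussian_filter(2.0 * difx * dify, sigma=2)
    denominator = ndimage.gaussian_filter(difx**2 - dify**2, sigma=2)

    # Doubled-angle map in (-pi, pi]
    angles = np.arctan2(numerator, denominator)

    # Average in the doubled-angle representation (avoids the 180-degree
    # wrap-around problem), then halve to get the physical orientation
    mean_doubled = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
    return mean_doubled / 2.0

# e.g. np.degrees(dial_orientation("dial_crop.png"))  # hypothetical filename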

In any case, the above procedure has worked for me when coding image analysis for orientation throughout an image (coding was done in Igor Pro in my case). So maybe it is useful for you.

more than 6 years ago

How Regulations Hamper Chemical Hobbyists

kebes Re:while historical chemical advances (610 comments)

As a chemist and practicing scientist, I can attest to the phenomenal costs of doing modern science (much of which comes from safety regulations, and the associated "certified" equipment). So I do agree that it is very difficult in the modern age for a hobbyist in their garage to make a groundbreaking discovery... That having been said, I think there are many reasons why hobbyist chemistry (and hobbyist science in general) is a good thing:

1. The combinatorial space in science (and in the production of chemicals especially) is absolutely massive. There is no practical way for chemists to explore it all, so of course they make educated guesses about what is both (a) reasonably easy to make; and (b) of some practical value. However because the combinatorial space is large, there is still plenty of uncharted territory for others to explore. Random fortuitous discoveries are certainly a part of science.

2. Hobbyists can afford to do research that is risky and has no obvious application (I mean "risky" in the sense of "it might not work or lead anywhere" and not in the sense of "it might be dangerous"). They don't have to satisfy funding agencies or pragmatic concerns. They can just explore. Thus they can sometimes pursue crazy lines of inquiry that established scientists wouldn't touch.

3. There is such a thing as having your creativity inhibited by institutionalized concepts. A hobbyist isn't as restricted by the "well-established-rules" of the field, and thus may make creative discoveries others would have missed. (This is rare, by the way: the vast majority of science comes from pushing along using well-established procedures and concepts... but rare "out of the box" discoveries are also important in science.)

4. Doing chemistry (or science in general) on a budget, using only commonly-available equipment, can actually force specific kinds of discoveries. Specifically, it helps to discover things that are cheap (which industry loves!) since it can be done with commodity chemicals and tools. (Who knows, there may be a cheap way to make a better antifreeze using only what is in your house and back-yard.) So hobbyists actually have a chance to discover things that will actually make an impact on industry (whereas the chance that they discover something fundamentally new, without modern diagnostic tools, is slimmer).

5. Finally, even if the hobbyist doesn't actually discover anything new or interesting (which is, by far, the most likely outcome), it has a positive effect on the participants. The people doing it are doing so for fun (presumably), and that in itself is reason enough. Moreover it may be the catalyst for someone to go into science professionally. The ability to make kids enthusiastic about science should not be overlooked. Like most hobbies, hobby-science is more about the process than the end result.

more than 6 years ago

How Regulations Hamper Chemical Hobbyists

kebes Bad example... (610 comments)

As a chemist, I definitely like the idea of hobby chemists, and/or home laboratories. People should be free to do science at home if they are so inclined. But this is in some sense a bad example:

Charles Goodyear figured out how to vulcanize rubber with the same stove that his wife used to bake the family's bread.

You should never use the same equipment for your chemistry as for your other household things. If you're going to do chemistry at home, do it safely. This means having a separate (well-ventilated) room for your work, and using a separate oven, microwave, glassware, and other equipment for your work. Chemical contamination is a real threat. You may look at a chemical reaction and deem all the reactants and products to be safe... but if you make a mistake you may contaminate a room/oven/glassware with a more dangerous side-product. And you do not want to then be ingesting these contaminants (worse, you do not want to expose your family and friends).

So, like I said, be safe and use dedicated equipment for your experiments. (And don't brush your teeth with the toothbrush you use to clean your test tubes.)

more than 6 years ago

Submissions


kebes kebes writes  |  more than 7 years ago

kebes (861706) writes "According to the official QEMU site, the QEMU accelerator module, KQEMU, has just been released under the open-source GPL license. QEMU is a cross-platform processor emulator, allowing you to virtualize an entire PC. The KQEMU module allows a significant virtualization speedup when emulating an x86 processor on x86 hardware. The module was previously available as a binary-only add-on to the open-source QEMU. This recent relicensing makes QEMU a fully open-source, high-speed virtualization tool available to all."

Journals


Nearing critical mass for funding open nVidia drivers

kebes kebes writes  |  more than 7 years ago I was just made aware of a pledge drive attempting to raise $10,000 USD to help fund a project working on open-source nVidia drivers. This would provide full 3D-acceleration to open-source operating systems, and could potentially sidestep the persistent debate between "partial support with open-source drivers" vs. "full support with proprietary drivers." It appears that the organizers intend to use the FSF as an intermediary to transfer the funds. They are already more than 3/4 of the way to their goal. Is this a viable model for future software development? Will companies like nVidia take notice?


Study finds that unconscious makes good decisions

kebes kebes writes  |  more than 8 years ago This is a story submission that I made yesterday (that was rejected). Here's the summary:

According to an article published today in Science Magazine, overthinking important decisions is counter-productive. The study found that for unimportant decisions (such as buying shampoo), deliberation would yield useful results (people pick the best product), whereas for more complex and expensive decisions (such as buying furniture or a car), deliberation would tend to decrease the likelihood of picking the optimal product. From the Science Magazine news report: 'as the decisions become complex (more expensive items with many characteristics, such as cars), better decisions and happier ones come from not attending to the choices but allowing one's unconscious to sift through the many permutations for the optimal combination.'



More information on the research:

There is also news coverage of this study on Reuters and WebMD.

The actual Science paper citation is: Ap Dijksterhuis, Maarten W. Bos, Loran F. Nordgren, Rick B. van Baaren "On Making the Right Choice: The Deliberation-Without-Attention Effect" Science 17 February 2006: Vol. 311. no. 5763, pp. 1005 - 1007 (for those with subscriptions: DOI 10.1126/science.1121629). The abstract from the actual paper says:

Contrary to conventional wisdom, it is not always advantageous to engage in thorough conscious deliberation before choosing. On the basis of recent insights into the characteristics of conscious and unconscious thought, we tested the hypothesis that simple choices (such as between different towels or different sets of oven mitts) indeed produce better results after conscious thought, but that choices in complex matters (such as between different houses or different cars) should be left to unconscious thought. Named the "deliberation-without-attention" hypothesis, it was confirmed in four studies on consumer choice, both in the laboratory as well as among actual shoppers, that purchases of complex products were viewed more favorably when decisions had been made in the absence of attentive deliberation.

I find the result of this study very interesting. Anecdotally, I'm sure many of us have had the experience of trying to make a difficult decision, and going in circles: reconsidering the same data over and over again. Ultimately, I often just "make a choice!" and interestingly, this choice often turns out to be a good one. This study suggests that we should follow our instincts even more often. It's also possible that people would suffer from Buyer's remorse less intensely if they spent less time worrying about their purchases (both because their purchases were, on average, more optimal, and because they would not have stressed-out over the purchase in the first place quite so much).


Edit: Looks like someone else submitted the same news item.


AMD Vendor section sucks

kebes kebes writes  |  more than 8 years ago

This is not a rant about how slashdot has "sold out" with the recent inclusion of a "Vendors" section. Slashdot has always been a commercially-supported community-driven site (i.e.: we look at ads to support the bandwidth costs). That's fine. I actually think the "Vendors" section is a great idea. We get the slashvertisements put into a logical place, so that people can easily ignore them if need be.

However, the AMD vendor section (as of this writing there is only an AMD in the vendor list) is worthless. Apparently it's up to the vendor to post stories into their section, so AMD can decide what to post and how often to post. What have they chosen to do? They have wasted an opportunity by making their section useless and worthless. They have posted literally dozens of stories every day. All of the stories (of course) point to new AMD products, or articles about how AMD is cool and whatnot. This in and of itself is fine... but the fact is that they've killed the part of slashdot that actually makes it fun and useful: the comments. Each story has a comments section, but NO ONE ever posts comments.

The reason that people avoid it is simple. No, it's not because we dislike talking about commercial products (lots of stories are about our fav. commercial products)... it's because they've overloaded that section, making it impossible to find a good place to have a discussion about the product or news release.

It's too late. The damage is done. What they *should* have done is release one (or two) stories when the vendor section first started. Slashdotters would have been intrigued, and we would have gone there and added some comments; had a few discussions (both about the vendor section and AMD products, no doubt). As the days went by, people with an interest in AMD would go and look at the latest items, and probably interesting discussions would evolve in the AMD section. This would generate a lot of interest in the products, as we would be able to advise each other on what to buy, etc.

Instead, every slashdotter (it would seem) has been turned off from the whole notion of using that section. Based on the number of comments that appear (which is to say, almost none), no one is interested in that section. And it's too late. AMD wasted their chance, and this also means that future vendors won't bother listing in the Slashdot vendor section, since it's obviously useless.

No one is reading those press releases, because there are hundreds and they are not differentiated. If only they had chosen a more sensible route, they could really have generated some interest in the community.

Oh well, another good idea destroyed by over-eager PR-types.
