Why Standard Deviation Should Be Retired From Scientific Use
I'm a little surprised at Nassim Taleb's position on this.
He has rightly pointed out that not all the distributions we encounter are Gaussian, and that outliers (the 'black swans') can be more common than we expect. But moving to mean absolute deviation hides these effects even more than standard deviation does; outliers are discounted further. It would also make the null hypothesis in studies more likely to be rejected (mean absolute deviation is typically smaller than standard deviation, so effects look more significant than they are), and we would find 'correlations' everywhere.
For non-Gaussian distributions, the solution is not to discard standard deviation, but to reframe the distribution. For example, for some scale invariant distributions, one could take the standard deviation of the log of the values, which would then translate to a deviation 'index' or 'factor'.
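As an illustrative sketch (the distribution and numbers are assumed, not from the post): for log-normal samples, the raw standard deviation is inflated by the tail and the mean absolute deviation is smaller still, while the standard deviation of the logs recovers a stable multiplicative 'factor'.

```python
import math, random

random.seed(1)

# Illustrative samples from a heavy-tailed, scale-invariant (log-normal) distribution.
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(100_000)]

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def mean_abs_dev(xs):
    m = mean(xs)
    return mean([abs(x - m) for x in xs])

sd_raw = stdev(samples)                         # inflated by the long tail
mad_raw = mean_abs_dev(samples)                 # smaller: discounts the outliers further
sd_log = stdev([math.log(x) for x in samples])  # stable multiplicative 'factor'

print(mad_raw < sd_raw, round(sd_log, 2))  # mad < sd; sd_log is close to 1.0
```

Here the standard deviation of the logs lands near 1.0, the sigma of the underlying Gaussian, however wild the raw values look.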
I agree with him that standard deviation is not trustworthy if you apply it blindly. If the standard deviation of a particular distribution is not stable, I want to know about it (not hide it), and come up with a better measure of deviation for that distribution. But I think the emphasis should be on identifying the distributions being studied, rather than trying to push mean absolute deviation as a catch-all measure.
And for Gaussian distributions (which are not uncommon), standard deviation makes a lot of sense mathematically (for the reasons outlined in the parent post).
Ask Slashdot: DIY Computational Neuroscience?
You'll find that the academic system is not as elitist as you would think, at least not in countries where the barriers to entry are low. It only takes a few years to work through a basic degree, and if you have any ability, you can soon move beyond that to real research and contributions.
The reality in academia is that you can put up any ideas at all, and the peer-reviewed journals have a huge diversity of contributors. Sometimes the strongest papers are those that argue an idea to its logical extreme; although reality is usually shades of gray, this diversity of ideas adds insight.
The reason why academia may reject certain ideas is not that the ideas are radical (there are a myriad of ideas in the university context), but that those ideas have not been thought through. There is no substitute for years of developing an idea and a coherent presentation of it.
I don't want to gild the lily here; there are many contexts which make it hard for someone to participate in academia, whether for economic reasons or time commitments. But that's a social reality. If many people don't have "a room of one's own" as Virginia Woolf put it, then we have to push for more than changes in the academies; we have to push for fundamental social reform.
Black Silicon Slices and Dices Bacteria
In addition, the asbestos crystal lattices tend to fracture linearly, which results in them remaining in the organism and causing damage for much longer.
Researchers Dare AI Experts To Crack New GOTCHA Password Scheme
My understanding is that the password in the exercise is used as a seed to generate the 'gotcha' images. So yes, you then have to match these up to descriptions after entering the password. The aim is to slow down brute forcing of the password.
So for each password guess, the AI then has to come up with plausible matchings of the images against the set of descriptions. Only if it can restrict the permutations enough can it run fast enough to brute force the password/permutation hash.
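A rough back-of-envelope sketch of that search-space argument, with all the numbers assumed purely for illustration:

```python
import math

# Toy numbers, all assumed: each password guess seeds n inkblot images that
# must be matched against the n stored descriptions before it can be verified.
n_images = 10
password_space = 2 ** 40                  # assumed size of the password search space

orderings = math.factorial(n_images)      # 3628800 possible matchings per guess
pruned = 100                              # assumed: clues let a bot keep ~100 matchings

work_unpruned = password_space * orderings
work_pruned = password_space * pruned
print(work_unpruned // work_pruned)  # 36288: the slowdown the matching step buys
```

The scheme's strength therefore rests entirely on how far the descriptions let an attacker prune the matchings.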
I don't feel the solving gap between humans and AI will be wide enough. Some of the descriptions are too vague for humans to solve: words like 'alien', 'thing', 'guy', 'woman', 'face' don't convey enough visual information, and fit most of the images. Other descriptions are clues for a bot: color words narrow down the permutations (especially since they usually refer to blobs near the centre line); and common placement words like 'head', 'nose', 'eyes', 'mustache' can be linked to particular areas of the image. Clearly a human will do this more easily, but it is doubtful that a human will find only one permutation, and a bot may be able to narrow it down enough.
I can't imagine wanting to attempt this challenge unless I was convinced that humans could select close to the correct permutation for each of the puzzle sets. If a human cannot do it reliably, then it would be unreasonable to expect a bot to have any chance at all.
USB Implementers Forum Won't Play Nice With Open Hardware
This isn't a recent change; component distributors such as Mecanique (see https://web.archive.org/web/20070825070852/http://www.mecanique.co.uk/products/usb/pid.html) used to on-sell blocks of PIDs from their VID, but the USB-IF began cracking down on the practice years ago. Likewise, voti.nl was threatened with legal action (see http://www.voti.nl/shop/catalog.html?USB-PID-10).
For some projects, you can obtain a PID from the manufacturer of a USB chip (e.g. http://www.ftdichip.com/Support/Knowledgebase/caniuseftdisvidformypr.htm), but this generally means using the manufacturer-supplied driver, and doesn't really help if you want to customize things further.
There doesn't seem to be a reasonable solution for small runs beyond the prototype phase. So in effect the USB-IF is motivating hobbyists to simply reuse VID/PID pairs from similar devices, which is only going to lead to compatibility headaches in the future.
I can understand that they wish to have an orderly process so that operating systems can do automatic device recognition and driver installation, but it is short-sighted not to provide an opportunity to license a much smaller address space at a reasonable cost.
(For further information, the prototype VID is 0x6666, and many known VID/PID pairs are listed in http://www.linux-usb.org/usb.ids)
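For what it's worth, a minimal sketch of looking up a VID/PID in that usb.ids database; it assumes the standard usb.ids layout of vendor lines ("vvvv  name") followed by tab-indented device lines ("pppp  name"), and the sample device entry is invented:

```python
def lookup(usb_ids_text: str, vid: str, pid: str):
    """Return the device name for vid:pid, or None if not present."""
    vendor = None
    for line in usb_ids_text.splitlines():
        if not line.strip() or line.startswith("#"):
            continue
        if not line.startswith("\t"):
            vendor = line[:4].lower()        # vendor line: "vvvv  vendor name"
        elif vendor == vid.lower():
            entry = line.strip()
            if entry[:4].lower() == pid.lower():
                return entry[4:].strip()     # device line: "\tpppp  device name"
    return None

sample = (
    "6666  Prototype product Vendor ID\n"
    "\t0001  Example prototype device\n"
)
print(lookup(sample, "6666", "0001"))  # Example prototype device
```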
Scientists Say Climate Change Is Damaging Iowa Agriculture
I suspect you are being deliberately obtuse - the disturbing aspect of the ozone measurements wasn't so much the finding of a hole as that the levels were falling markedly from year to year.
The cookies are disappearing from the jar. The typical red-face response is "it wasn't me". You go further and say "the cookies weren't ever there". Yet day by day the cookies continue to disappear.
Nokia Design Guru Urges Apple To End Cable Chaos
The difference being that Apple makes it very difficult for third party manufacturers, while the USB consortium are keen to get broad industry support. For Apple the patent seems to be used to exclude competition, while the USB patent holders and USB manufacturers are engaged in reciprocal and royalty-free licensing arrangements.
From a libre point of view, a patented standard is not the same as a patent-encumbered standard; the difference lies in the licensing.
LLVM's Libc++ Now Has C++1Y Standard Library Support
I don't get the argument "It's not universally supported yet". It's a compiler, not an operating system. Unless perhaps you are trying to share a code base across extremely diverse systems. And even then, the backward compatibility is such that it just means you have to be careful about restricting certain features in the shared part of the code base. Most developers can cope with working on multiple dialects of a programming language, even, astonishingly, multiple languages! It doesn't make sense not to use language features when you can (unless you think those features are bad or buggy). And we have been working with compilers that don't have full compliance with any standard for a long time now!
I can understand some caution in adopting a new compiler, but the C++11 features have been around in nascent form long before the standardization happened. It isn't like the committee doesn't take time to make decisions...
U.S. Gov't Still Fighting the Man Behind Buckyballs; Guess Who's Winning?
While I agree that the product was not defective, and that the ban was out of proportion to the scale of the problem, there was a potential for harm if the magnets were given to children (not that they were marketed in that way). There have been a number of incidents in the past with other magnetic toys.
Ask Slashdot: Speeding Up Personal Anti-Spam Filters?
I doubt that he is using "grep -F -f ...", because fgrep can search for a hundred thousand patterns in a megabyte of data in under a second even on a modest machine (and most of the time is building up the regex state machine). I suspect he is using "egrep -f", and lots of patterns with wildcards. Worse, he will be running it once on each email, which means rebuilding the regex state machine each time.
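The fix is to pay the pattern-compilation cost once and stream all the mail through the resulting matcher. A minimal sketch of the same idea in Python (the patterns are invented examples):

```python
import re

# Invented example patterns, one per line as in an "egrep -f patterns" file.
patterns = ["v[i1]agra", "free +money", "click +here"]

# Build the state machine ONCE, instead of once per email.
matcher = re.compile("|".join(f"(?:{p})" for p in patterns), re.IGNORECASE)

def is_spam(message: str) -> bool:
    return matcher.search(message) is not None

mailbox = ["Get FREE   money now!", "Meeting moved to 3pm"]
print([is_spam(m) for m in mailbox])  # [True, False]
```

The per-message work is then just the scan, which is what makes fgrep-style filtering fast even with enormous pattern lists.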
Researchers Reverse-Engineer Dropbox, Cracking Heavily Obfuscated Python App
Actually, it's more like not locking it at all, and putting a big pot plant in front of the door to hide it...
Could a Grace Hopper Get Hired In Today's Silicon Valley?
I studied computer science in the mid 1980s (in Australia), and around 50% of the undergrads were female (unlike in engineering at the time). The ratio now is quite skewed, so something has clearly changed. There is probably a combination of factors, but I reckon we are missing out on a lot of good programmers simply because the field has become less attractive to women.
Adapteva Parallella Supercomputing Boards Start Shipping
But indeed, it is the learning experience that matters, because individual cores are not getting much faster, and we are going to have to come to grips with how to parallelize much of our computing. The cores in this project may not be particularly powerful, but they aren't really weak either; the total compute power of this board is more than you will get out of your latest Intel processor, and it uses a whole lot less power. Yes, it isn't ideal given our current algorithms and ways of writing programs, but massive parallelism is at the centre of performance computing, and will be for the foreseeable future.
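A toy sketch of the kind of data-parallel decomposition a many-core board pushes you to practise; the thread pool here is only a stand-in for the Epiphany cores, purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Split one computation into independent chunks, run them in parallel,
# and combine the partial results.
def work(chunk):
    return sum(x * x for x in chunk)

data = list(range(100_000))
chunks = [data[i::16] for i in range(16)]        # 16 interleaved slices
with ThreadPoolExecutor(max_workers=16) as pool:
    total = sum(pool.map(work, chunks))

print(total == sum(x * x for x in data))  # True
```

The hard part on real hardware is exactly what this hides: deciding how to partition the data and how the partial results flow back.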
Scientists Seek Biomarkers For Violence
Someone must have caught Gattaca on Encore a few weeks ago
No, they lost their daughter in a school shooting (according to the summary). Probably not a good motivator for unbiased science, but an understandable reason why they have taken on this project. One would expect that such a study may indeed find some genetic markers, but as you point out, these would only be indirect and partial origins for an individual's behaviour.
The Pentagon's Seven Million Lines of Cobol
I'm no fan of COBOL (and enjoyed Dijkstra's criticism), but its main flaws are not to do with being verbose. "Hello World" is longer than it would be in most languages, though a lot of that is just overhead; in general the line count in COBOL is not massively bigger than in other languages. Of course, the syntax in each line can be a mouthful, with an excessive use of keywords obscuring the structure, but in coding business logic this is not always a disadvantage (business logic is often convoluted anyway, and at least the code fragments are a little more readable to a layperson).
My point is, although you are probably correct that there is very likely a better modern language to replace this code, the replacement is going to be a lot more than 70 lines' worth. The real saving in code is most likely to come through the refactoring process, whatever language is chosen.
Microscopic "Tuning Forks" Help Determine Effectiveness of Antibiotics
While I don't dispute that there can be improvement in hospital processes, many of these people who died were already at risk. If I have a heart attack, a 0.3% risk of dying through medical error at the hospital is nothing compared to my risk if I don't go to the hospital. What was the average risk of death had a patient not undergone a hospital procedure? Maybe there are classes of patients who should not undergo procedures! It isn't terribly informative to quote such a statistic without any context, and without giving details about the original illnesses and the common errors involved.
It's a bit like saying that many people die in their beds. True, but not on its own enough to warrant an inquiry into bed safety.
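To put the comparison in rough numbers (all assumed for illustration, not taken from the article):

```python
# All numbers assumed: compare the risk of a fatal hospital error against
# the baseline risk of skipping treatment entirely.
p_error_death = 0.003                    # 0.3% chance of a fatal medical error
p_untreated_death = 0.30                 # assumed risk of death if untreated
p_treated_death = 0.06 + p_error_death   # assumed: treatment cuts disease risk to 6%

print(p_treated_death < p_untreated_death)  # True: the hospital is still the better bet
```

The error rate only becomes meaningful next to the counterfactual risk, which is the context the statistic was quoted without.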
MIT Researchers Can See Through Walls Using Wi-Fi
It's somewhat more difficult than just 'seeing through things'; walls are not totally transparent at the wavelengths of interest, and reflections from them dominate the signal. Moreover, this technique uses equipment that can't do the precise timing of radar measurements (which means they can only track the angles of moving targets, not their positions). It is a neat sort of hack, and interesting for that reason.
Disney Research Creates Megastereo - Panoramas With Depth
Which points to the shocking state of the patent system, where 'omnistereo' can be patented in 2011 despite having been openly published years earlier. Blimey, even stereoscopic movies were still being patented in 2005! (oh yes, that 'rectilinear' makes all the difference...)
Disney Research Creates Megastereo - Panoramas With Depth
Unfortunately it can't; the method they describe relies on taking multiple overlapping shots. Because the camera is mounted on a swing arm, the overlapping shots are taken from different locations, which enables depth to be reconstructed from what are effectively stereo pairs. The only tricky part is that the shots in a pair face in different directions, so the algorithm has to compensate for this.
While it is true that the panoramic painting may have been observed from different points on a tower, each spot in the scene corresponds to only a single point in the painting, so the stereo information has been lost.
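The depth recovery from those overlapping shots boils down to the classic pinhole stereo relation Z = f·B/d. A minimal sketch with assumed numbers (nothing here is from Disney's actual pipeline):

```python
# Illustrative only: names and numbers are assumed.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# A feature shifted 40 px between two shots taken 0.05 m apart on the arm,
# with a focal length of 1000 px:
z = depth_from_disparity(1000.0, 0.05, 40.0)
print(z)  # 1.25 (metres)
```

A single viewpoint gives zero baseline, which is exactly why the painting case fails: with B = 0 there is no disparity to invert.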
Teens, Social Media, and Privacy
you shouldn't be ashamed for people to know who you are
... says Anonymous Coward