Boston Elementary, Middle Schools To Get a Longer Day
Favorite clickbait hook?
Solve all your machine learning problems with this one weird kernel trick!
Global-Warming Skepticism Hits 6-Year High
Ah, this is when I wish Slashdot had a way to delete posts. With a title of "People are tired of the endless guilt trip" I assumed the post intended to say that scientific reporting is creating a guilt trip and therefore a bad thing. But re-reading the second-to-last sentence makes it seem like he's faulting society's reaction rather than the reporting itself. I admit that I may have misinterpreted the argument.
Global-Warming Skepticism Hits 6-Year High
Let me get this straight. You're saying that when people research and consider the negative consequences of their actions, and then attempt to minimize them, they're being irrational? That it's impossible for people to consider these negative consequences without getting paralyzed, and, therefore, that nobody should research the negative consequences of their actions, and everyone should act purely selfishly? That's a great strawman; I know many altruistic people, and none are that stupid.
Most sane people consider it a fundamental goal in life to make the world a better place. It's true that this isn't a rational choice, but then again, it's not a rational choice to act selfishly, because that, too, is based on your emotional response to the stimuli your body receives. In our society, people who make the selfish choice are generally called sociopaths. The only possible explanation for your post is that you are one of them.
Come Try Out Slashdot's New Design (In Beta)
Autorefresh is by far the worst "feature" of slashdot. Sometimes I don't read straight down the page--which means it takes me a long time to read the whole page--and in these cases the autorefresh often happens in the middle of reading a summary. I can't even imagine a use case where users would want this, yet there's no way to turn it off.
Why Hasn't 3D Taken Off For the Web?
Recent technologies suggest that building 3d models is going to become easy and automatic very soon--see, for example, Building Rome in a Day. Sellers will just take a few dozen pictures, or better yet, have a rig that will take a few dozen pictures all at once (cameras are cheap), and then plug the photos into software that automatically generates the 3D model.
If that doesn't work, use a Kinect.
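At the heart of photo-based reconstruction pipelines like the one above is triangulation: recovering a point's 3D position from its pixel coordinates in two (or more) calibrated views. Here's a minimal sketch of the standard linear (DLT) method, using toy camera matrices invented purely for illustration--nothing here comes from the Building Rome in a Day system itself:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 are 3x4 projection matrices; uv1, uv2 are pixel coordinates.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point into a camera's image plane."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose, and a one-unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

Real systems add feature matching, camera-pose estimation, and bundle adjustment on top of this, but the triangulation step is the geometric core.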
Why Hasn't 3D Taken Off For the Web?
Google already wants to use it in maps. I personally would prefer if Google Maps worked like Google Earth.
Online stores like Amazon tend to show multiple views of products. Why not just provide a 3d model users can rotate themselves? This is especially true for the sites providing models aimed at 3D printing.
Data visualizations use 3D all the time; it's built into most scientific plotting packages.
Building 3D models of arbitrary scenes from just images is rapidly leaving the research world, as demonstrated by recent 3D reconstruction projects like Building Rome In a Day (a research page which, by the way, could greatly benefit from 3D on the web). I wouldn't be surprised if artists start uploading their sculptures, or parents start uploading models of their kids' sandcastles.
And these are just the applications I can think of with dumb 3d models, no physics.
Google Patents Software To Identify Real-World Objects In Videos
I don't even have access to read the terms of the grant, since the grant is for my advisor. As far as I know, Google does not have any specific rights to the research, which is why we were able to release the results, algorithm, and code into the public domain, and nobody has ever told me that dissemination or use would be restricted in any way. It's common for companies like Google to fund public research that they will later have no control over, since this sort of work benefits Google more than it benefits any of its competitors. And no, that benefit doesn't depend on patents; it's simply because Google has access to huge amounts of data, compute power, and machine learning/computer vision expertise.
Google Patents Software To Identify Real-World Objects In Videos
they're patenting a specific method of doing so.
There is nothing specific about the methods they're patenting. I just worked on a very similar project, and after reading the patent, I see very little separating what they patented from what we did. Admittedly, we don't use dimensionality reduction the way they suggest (although we did for a while), and we don't provide specific names for the objects we discover (though we have talked about doing so via crowdsourcing). Granted, our work is more recent than the patent filing, but people have been attempting similar things for ages (e.g. , ...they are very easy to find). Worse, the two papers I cite provide enough detail to actually build a working system, whereas the patent provides little detail beyond a few references to well-known machine learning and computer vision techniques. And even when it does suggest methodology, it's always "maybe we'll use this, maybe not," and it tends to list several potential methods without any indication that they've researched which ones actually work.
Microsoft Seeks Patent For "Search By Sketch"
No, Google Goggles does nothing like this. Google Goggles (and Google search-by-image) is, from the experiments I've done, instance-based image retrieval. That is, it can match objects with exactly the same shape (given a picture of the Eiffel Tower, it will return other images of the Eiffel Tower). However, given a drawing, even a good one, the contour shapes won't match quite well enough, and the algorithm will return garbage. The same can be said for 'deformable objects' like dogs and people.
In fact, I'm quite sure that nothing like this exists. I'm not sure about the actual search engine part of all this, but I did see a talk last fall by one of the researchers who worked on ShadowDraw, which I'm reasonably sure is going to be a component of the final system. The real problem that *they* had to solve was the simple fact that the average person is a horrible, HORRIBLE artist. Ask 90% of people to draw a rabbit and it will come out as a blob that might be an animal, but that's about all you can tell. The algorithms they talked about that make the system work as well as it does were quite impressive--extremely fast contour indexing, contour combination, converting real photos into convincing sketches--it all sounds easy, but I dare you to actually try implementing it.
Now--and let's see what happens to my karma for saying this--I actually kinda think they deserve a patent for this. Not for coming up with the idea of drawing-based search; that idea is obvious. However, making a system that works as well as ShadowDraw is quite an achievement, and more importantly, Microsoft Research would never have released the algorithm to the public unless it could be patent-protected. The patent in this case isn't about protecting Microsoft's innovation; it's about motivating Microsoft to publish for the sake of other innovators.
Where Were the Robots In Fukushima Crisis?
That's about $13 million. To put that into perspective, the Lunar X-prize robotics challenge offers prize money of $30 million; that doesn't even include team sponsorship. According to Wikipedia, the CMU robotics institute's projects alone cost more than $50 million every year. I know...financial crisis and all...but still, a billion yen is not much for robotics research.
MIT Creates Chip to Model Synapses
Mod parent up. The linked article (and the MIT press release) are misleading. The closest thing I can find to a peer-reviewed publication by Poon has its abstract here (no, I can't find anything through the official EMBC channels--what a disgustingly closed conference):
And there's some background on Poon's goals here:
The goals seem to me to be about studying specific theories about information propagation across synapses as well as studying brain-computer interfaces. They never mention building a model of the entire visual system or any serious artificial intelligence. We have only the vaguest theories about how the visual system works beyond V1, and essentially no idea what properties of the synapse are important to make it happen.
About two years ago, while I was still doing my undergraduate research in neural modeling, I recall that the particular theory they're talking about--spike-timing dependent plasticity--was quite controversial. It might have been simply an artifact of the way the NMDA receptor worked. Nobody seemed to have any cohesive theory for why it would lead to intelligence or learning, other than vague references to the well-established Hebb rule.
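The rule itself is simple to state, whatever its biological status. Here's a minimal sketch of the standard pair-based form of STDP; the amplitudes and time constants below are illustrative textbook-style numbers, not taken from Poon's work or any particular paper:

```python
import math

# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic spike, depress when it follows. Parameter values are
# illustrative only.
A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post fired before pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(10.0, 15.0) > 0)  # True: pre leads post, weight grows
print(stdp_dw(15.0, 10.0) < 0)  # True: post leads pre, weight shrinks
```

The exponential decay means only spike pairs within a few tens of milliseconds matter--which is exactly why it's tempting to read it as a causal refinement of the Hebb rule, and exactly why an NMDA-receptor artifact could mimic it.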
Nor is it anything new. Remember this story from ages ago? Remember how well that delivered on its promises of creating a real brain? That was spike-timing dependent plasticity as well, and unsurprisingly it never did anything resembling thought.
Slashdot, can we please stop posting stories about people trying to make brains on chips and post stories about real AI research?
Is Software Driving a Falling Demand For Brains?
The crux of Krugman's argument seems to be the extraordinarily misleading statement that "A world awash in information is one in which information has very little market value." Krugman has obviously never studied information theory. Yes, our world is 'awash' with information, but that's not because machines are especially good at producing it. Machines are only good at copying it.
Krugman's error stems from conflating two definitions of information. By one definition--the physical number of bits the human race has managed to store on hard drives--the amount of information we produce has been increasing exponentially. But this is not useful information, and not the kind that requires any serious education to produce. The other definition comes from information theory, where information is defined in terms of randomness: the information in a signal is the number of bits needed to convey it in its most compressed form (i.e. the 'random' component that can't be derived from the rest of the signal). By this definition, copying the 100MB file 'a.mp4' from my desktop to my home folder does not produce 100MB of information; it produces at most 64 bytes of information, since that's about the number of bytes it takes to describe the new state of the world.
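The two definitions are easy to tell apart with an ordinary compressor, which approximates the information-theoretic "most compressed form." A toy demonstration (the sizes and variable names are made up for illustration):

```python
import os
import zlib

# 100 KB of a single repeated byte: lots of stored bits, almost no
# information in the Shannon sense -- it compresses to almost nothing.
redundant = b"a" * 100_000

# 100 KB of random bytes: nearly incompressible, because every bit is
# part of the 'random' component of the signal.
random_data = os.urandom(100_000)

print(len(zlib.compress(redundant, 9)))    # on the order of a hundred bytes
print(len(zlib.compress(random_data, 9)))  # close to the full 100,000
```

A world "awash" in the first kind of data can still be starved of the second kind--which is the kind educated humans are paid to produce.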
As for the rest of the article, Krugman argues (correctly, I believe) that any job which requires the production of information will remain strictly in the domain of human beings. However, he seems to forget that most physical goods are just copies of other physical goods, and therefore contain very little information. The production of those goods can generally be replaced by machines.
However, there is still some insight in what Krugman says, though you have to think a bit to see it. He is actually arguing that an education is only valuable if it teaches you how to produce information, and that an education which only teaches you to parrot facts makes you very much like a computer--and very much replaceable by one. Hence his use of lawyers as an example. I don't think we computer scientists have much to worry about from this argument.
Harvard Ditching Final Exams?
If you want statistics on Harvard, here they are:
The rest of gradeinflation.com gives much more information you may find interesting.
The reason for this is that the more students they fail, the better they look.
This is also incorrect. Far more important to a school's ranking are (a) the percentage of admitted students who accept the admissions offer, and (b) the number of students who get job offers after graduating. This incentivizes schools to lower failure rates (US News and World Report publishes graduation rates and rolls them into its rankings precisely because low rates turn off prospective students), and also to inflate grades to make their students' resumes look better.
Intel, NVIDIA Take Shots At CPU vs. GPU Performance
At least as far as parallel computing goes. CPUs have been designed for decades to handle sequential problems, where each new computation is likely to depend on the results of recent computations. GPUs, on the other hand, are designed for situations where most operations happen on huge vectors of data; the reason they work well isn't really that they have many cores, but that the work of splitting up the data and distributing it to the cores is (supposedly) done in hardware. On a CPU, the programmer has to split up the data himself, and giving the programmer control of that process makes many hardware optimizations impossible.
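One loose way to see the distinction from the software side--here NumPy's whole-array operations stand in for hardware-managed data parallelism, so this is an analogy, not a GPU benchmark:

```python
import numpy as np

a = np.arange(10_000, dtype=np.float64)
b = np.arange(10_000, dtype=np.float64)

# "CPU style": the programmer walks the data one element at a time and
# controls exactly how the work is divided up.
c_loop = np.empty_like(a)
for i in range(a.size):
    c_loop[i] = a[i] * b[i]

# "GPU style": a single whole-vector operation; how the work is split
# across execution units is the runtime's (or hardware's) problem.
c_vec = a * b

print(np.allclose(c_loop, c_vec))  # True: same result, different contract
```

The second form is what gives the hardware freedom to optimize: since the programmer never specifies an ordering, the operation can be distributed however the silicon likes.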
The surprising thing in TFA is that Intel is claiming to have done almost as well on a problem that NVIDIA used to tout their GPUs. It really makes me wonder what problem it was. The claim that "performance on both CPUs and GPUs is limited by memory bandwidth" seems particularly suspect, since on a good GPU the memory access should be parallelized.
It's clear that Intel wants a piece of the growing CUDA userbase, but I think it will be a while before any x86 processor can compete with a GPU on the problems that a GPU's architecture was specifically designed to address.
NIH Spends $400K To Figure Out Why Men Don't Like Condoms
What an awful article...this one, and the HIV one that everyone keeps citing. This one starts off with the statement from the director of the institute that created it, "male circumcision is a scientifically proven method for reducing a man's risk of acquiring HIV infection." No real scientist would ever make this claim--science does not prove anything.
It gets worse. The way they conducted the studies (in both cases) was to start with a large group of men, circumcise half of them, and see who comes back with more infections. There's no way to do blinding here, since you know whether or not you've been circumcised. For example, one confounding factor may simply be that circumcisions hurt--maybe the control group just had less sex. Unfortunately, they didn't give any evidence for a mechanism, which makes the result somewhat difficult to believe. (As an aside, the mechanism they suggest is that the foreskin helps HPV enter the cells on the surface of the penis--which suggests that infection could be prevented by simply pulling the foreskin back for a while after sex.)
Another odd part about the studies: the Herpes/HPV study was done in Uganda, and the HIV one in Kenya. Applying the results of a study to a population different from the one studied is always a problem, but it's even worse in this case, since this whole conversation started from the belief that circumcision stops people from using condoms. Kenya and Uganda are both known for disliking condoms, so any effect of circumcision on condom use would have been minimized in these studies.
35 Articles of Impeachment Introduced Against Bush
I'm interested to hear how you define "lie". I think analytic philosophy has shown that it's nearly impossible to decide whether a statement is "true" or "false" in a completely black-and-white sense.
For me, a lie is any attempt to convince someone else of something that you yourself don't believe. And Bush certainly did this; he knew that the intelligence wasn't nearly as condemning as he wanted America to believe.