Comments


How Vacuum Tubes, New Technology Might Save Moore's Law

wanax Re:Not a computing element (183 comments)

That's mentioned in the IEEE Spectrum article (which by the way is about the most clearly written article on an early prototype technology that I've ever read).
The problems are:
-The operating voltage is too high; this can probably be mitigated by better geometry.
-There aren't yet enough simulations to guide improvements to the geometry, with the caveat that better performance (voltage-wise) might compromise durability.
-Because of the above, they don't yet have a good set of design rules for producing an integrated circuit. They're hopeful about this step, since the technique uses well-established CMOS technology and there are many tools available.

Their next targets are things like gyroscopes and accelerometers. I'd say on the whole this strikes me as realistic and non-sensational. But if anybody knows better, I'd like to hear why.

about a month ago

Wikipedia Mining Algorithm Reveals the Most Influential People In History

wanax citation puffery (231 comments)

This is no different from trying to measure scholars' intellectual impact with citation metrics like the h-index and its many recent successors, which try to repair the weaknesses of a fatally flawed idea. It makes no distinction between positive and negative citations, it ignores the raw fact of historical precedence, and it preserves every historical bias a culture may have.
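To make the flaw concrete, here's a minimal sketch of the h-index calculation (my own Python, not anything from TFA); note that nothing in it knows whether a citation is praise, a rebuttal, or an accident of who happened to publish first:

<ecode>
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))   # -> 3
</ecode>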

The most influential people in world history, at least at the very top tier, aren't particularly debatable, yet this list fails to capture them. In alphabetical order (and assuming they all existed):

Aristotle
Buddha
Confucius
Homer
Jesus
Lao Tzu
Muhammad
Plato
Ved Vyasa

Then there's the next tier, which includes people like Alhazen, Alexander, Augustine, Einstein, Genghis, Hammurabi, Imhotep, Newton, Linnaeus, Peter (of Russia), Shakespeare, Suleiman, Zeami Motokiyo, etc., though the further I try to extend the list, the more it converges with my own cultural history.

While unsupervised algorithms can often find interesting things in high-dimensional data, they aren't interpretable without some expert knowledge, and if your top 20 doesn't include at least the 9 entries I listed above, you can toss the method.

about 2 months ago

The Flaw Lurking In Every Deep Neural Net

wanax Re:Well what do you know (230 comments)

This is a well-known weakness of back-propagation-based learning algorithms. In the learning stage it's called catastrophic interference; in the testing stage it manifests as misclassification of similar inputs.
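For the testing-stage version, here's a toy illustration (my own NumPy sketch, not anything from the paper): with a simple classifier, a perturbation small enough to leave the input looking essentially unchanged can be aimed along the gradient and flip the output.

<ecode>
import numpy as np

# Hand-picked weights for a toy logistic "network"; nothing here is trained.
w = np.array([2.0, -3.0, 1.5, 0.5])
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid output; > 0.5 means class 1

x = np.array([0.5, 0.1, 0.2, 0.4])
print(predict(x))                               # ~0.79, confidently class 1

# For a linear model the gradient of the score w.r.t. the input is just w,
# so nudging x a small step against sign(w) is the worst-case small change.
eps = 0.2
x_adv = x - eps * np.sign(w)
print(predict(x_adv))                           # ~0.48, now on the other side of the boundary
</ecode>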

about 2 months ago

Ask Slashdot: Communication With Locked-in Syndrome Patient?

wanax Re:Yes, there are methods available (552 comments)

Shenoy's group is also working with patients these days, but I think they're focused on ALS rather than locked-in.

about 2 months ago

Ask Slashdot: Communication With Locked-in Syndrome Patient?

wanax Yes, there are methods available (552 comments)

Yikes, that sounds like a terrible experience. My sympathies to your sister-in-law and the whole family.

There are several methods available, most prominently implanting electrode arrays over premotor cortex; their activity can then be decoded online and used to control a computer pointer.

See for example:
http://www.youtube.com/watch?v...

You might want to contact Frank Guenther at BU, who has worked on this for several years and started the Unlock Project specifically for people in your sister-in-law's situation.

about 2 months ago

Male Scent Molecules May Be Compromising Biomedical Research

wanax Re:Molecules shmolecules (274 comments)

There's a huge bias towards using exclusively male mice in many types of research, and the issue of higher variance in female rodent behavior (due to estrous-cycle effects, among others) is well known (see e.g.: pdf).

There are also related problems, more generally, with stress and over-training in neuroscience. Experienced investigators are able to produce a much less stressful working environment for animals, so they tend to get different results from neophyte investigators even when following the same protocol. This shows up a lot when a different lab tries to replicate the work of an experienced post-doc, gets null results for the first six months, and then is suddenly able to replicate everything. That is often attributed to 'correcting' the protocol (usually after extensive communication with the original lab), but I think the change is often attributable to the investigator in the replicating lab becoming experienced enough to relieve the animals' stress (I don't have a great link for this; it's mostly an observation from having been around quite a few labs).

Over-training is also a problem, since it often takes thousands of trials (sometimes well into the hundreds of thousands) to train animals in complex cognitive tasks, and it's well known from experiments in humans (and a few in non-human primates and rodents) that neural responses shift profoundly between 'trained' and 'over-trained' states, say between amateur and professional ballerinas watching videos of ballet.

However, these issues are a much bigger problem in pre-clinical research than in basic research. Our understanding of the brain is sufficiently limited that the effects we're used to seeing in basic research questions swamp the potential modulation from gender, stress and training factors (unless you're talking about stress research specifically, but that field is pretty careful about controlling for these kinds of effects). The issue with pre-clinical research is that the difference between the current treatment and a proposed treatment is often only a few percent (note: if valid, this can still mean thousands of lives saved or hugely improved), so failing to identify and control for factors such as researcher or mouse gender can overwhelm the supposed primary result.

about 3 months ago

The Koch Brothers Attack On Solar Energy

wanax Opt-in vs Opt-out (769 comments)

To destroy the world's carrying capacity for humanity through global thermonuclear war, we have to opt in. To destroy that same capacity through climate change, it's enough that a modest proportion of the world's population opts out of mitigating carbon release (and the Pareto-optimal cost of mitigation is actually pretty small, around 2% of global GDP).

about 3 months ago

Sand in the Brain: A Fundamental Theory To Model the Mind

wanax Re:Sand in our Brain (105 comments)

Actually, since neurons have functional homeostatic pruning and nonlinear membrane responses, there are quite large values of zero when we're recording firing rate.

about 4 months ago

Sand in the Brain: A Fundamental Theory To Model the Mind

wanax Re:Sand in our Brain (105 comments)

With regard to question 2) No.
Question 1 is an ongoing field of research. Some of the work that I've found helpful in approaching the question:
-The Computational Beauty of Nature (Gary William Flake)
-Barriers and Bounds to Rationality (Peter Albin; there are free pdf copies available online).
-A New Kind of Science (Stephen Wolfram; also available free online).

about 4 months ago

Sand in the Brain: A Fundamental Theory To Model the Mind

wanax Re:Sand in our Brain (105 comments)

The linked article was horribly written. I'll take a shot at explaining it (or rather, a really, really simplified version of it).

Two of the fundamental problems that neural circuits must solve are the noise-saturation dilemma and the stability-plasticity dilemma. The first is best explained in the context of vision. Our visual system is capable of detecting contrast (i.e. edges) over a massive range of brightness, spanning about 10^10. Given that neurons have limited firing rates (typically between 0 and 200 Hz), there needs to be some normalization mechanism that allows useful contrast processing over massive variations in absolute input (more on this later). The stability-plasticity dilemma is that the brain needs to be flexible enough to learn from a single event (say, that touching a hot stove is a bad idea), but once learned, memories have to be stable enough to last the rest of a creature's life span.
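As an aside, here's a toy NumPy sketch (mine, not a mechanism described in TFA) of the divisive-normalization idea behind solving the noise-saturation dilemma: the same contrast pattern, presented at brightnesses spanning ten orders of magnitude, produces nearly identical bounded firing rates.

<ecode>
import numpy as np

def normalized_response(inputs, sigma=1.0, r_max=200.0):
    """Bounded firing rates driven by each input relative to the whole pool."""
    return r_max * inputs / (inputs.sum() + sigma)

pattern = np.array([1.0, 2.0, 4.0, 2.0, 1.0])    # an edge-like contrast pattern
for scale in (1e0, 1e5, 1e10):                    # ten orders of magnitude of brightness
    print(normalized_response(scale * pattern).round(1))
</ecode>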

The stability-plasticity dilemma implies that neural circuits must operate in at least two (as I said, very simplified) distinct states, a "resting" or "maintenance" state, and a "learning" state, and that there is a phase-transition point in between them. Furthermore, these states need to have the following properties regarding stability:
1) the learning state must collapse into the maintenance state in the absence of input (otherwise you get epilepsy).
2) reasonable stimulation (input) during the resting state must be able to trigger a phase change into the learning state (or you become catatonic).

Many circuits/mechanisms have been proposed to explain how the brain solves these dilemmas. Most of them involve defining a recurrent neural network that uses some combination of gated-diffusion and oscillatory dynamics to fit the well-known oscillatory and wave-based dynamics recorded in neural circuits. Some of these models employ intrinsic learning using a learning rule (e.g. self-organizing maps) while others are fit by the researcher. One key point about this class of models (as opposed to the TFA approach) is that they have a macro-circuit architecture specified by the modeler. Typically these models are at least somewhat sensitive to parametric perturbation.

TFA describes another approach, which comes out of research on cellular automata done by Ulam, von Neumann, Conway and Wolfram. This approach posits that parametric stability and macro-circuit organization are only loosely important so long as the system obeys a certain set of rules about local interaction (which could also be thought of as the micro-circuit), because it will self-organize to a point of 'critical stability'. In the two-state model described above, this approach predicts that neural circuits always sit at a state of 'critical stability', where maintenance occurs through frequent small perturbations or avalanches, and any new input will trigger a large avalanche, causing learning. Bak proposed this as a general model of neural circuit organization. One trademark of this type of model is that it shows 'scale-free' or 'power-law' behavior, where the frequency of an event falls off as a power of its size, so small events are common and large ones rare. Some recent data has shown power-law dynamics in neural populations (a lot of other data doesn't).
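If you want to see the avalanche picture in miniature, here's a quick Python sketch (mine, not the model from TFA) of the classic Bak-Tang-Wiesenfeld sandpile: drop grains one at a time, let any site holding 4 or more grains topple onto its neighbours, and record the size of each avalanche. Small avalanches vastly outnumber large ones, with no characteristic scale.

<ecode>
import numpy as np

rng = np.random.default_rng(0)
N = 30
grid = np.zeros((N, N), dtype=int)
avalanche_sizes = []

for _ in range(10000):
    i, j = rng.integers(0, N, size=2)
    grid[i, j] += 1                               # drop one grain of sand
    size = 0
    while (grid >= 4).any():
        for x, y in np.argwhere(grid >= 4):
            grid[x, y] -= 4                       # the site topples...
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < N and 0 <= y + dy < N:   # grains pushed off the edge vanish
                    grid[x + dx, y + dy] += 1     # ...shedding one grain to each neighbour
    if size:
        avalanche_sizes.append(size)

# A log-log histogram of avalanche_sizes is close to a straight line:
# the frequency of an avalanche falls off as a power of its size.
print(np.bincount(avalanche_sizes)[:20])
</ecode>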

One big problem with the critical stability hypothesis is that it doesn't deal well with the noise-saturation dilemma: it needs to cause the same general size of avalanche whether it's hit by one grain of sand, or 10^10 grains of sand.

None of this is particularly new; neural avalanches (albeit in a different context) were postulated back in the early '70s. Could some systems in the brain exploit self-organized criticality? Sure, but there is a lot of data out there that's inconsistent with it being the primary method of neural organization.

about 4 months ago

Illustrating the Socioeconomic Divide With iOS and Android

wanax Re:Spain loves Android (161 comments)

Having recently been in Spain (with my unlocked iPhone 4 in tow), I can tell you that the support for iPhones (at least in Barcelona) is terrible. It took trips to 4 different stores to find an iPhone 4-compatible prepaid mini-SIM (if I'd had the iPhone 5, I would have been SOL and stuck paying for roaming data from my US plan). None of those stores displayed iPhones prominently (they were available, at least through Vodafone, even the 5 new, but you couldn't use a prepaid SIM in it).

I tend to think the issue is that Spain has a really fractured retail environment, both with a lot of providers (Vodafone/Movistar/Orange/Yoigo and lots of third-party options) and with a lot of kiosk-type stores. Vodafone has its own retail outlets, but most of the others seemed to be based in malls, and the malls in turn seemed to carry one 'basket' of stores, depending on who owns the mall. During my search for a mini-SIM, for example, I was sent on a goose chase from store to store with directions that turned out to be pretty approximate (wrong address, but within about 300 meters of the correct one).

Given that retail environment, I think it's pretty natural that Android, with its myriad slightly customized, provider-branded phones, fares a lot better than iOS at the moment... People want something that can be supported by their local mall or kiosk.

about 4 months ago

Supreme Court Ruling Relaxes Warrant Requirements For Home Searches

wanax O/T but.. (500 comments)

What was the different solution? (I've also wrecked quite a few shirts in my time)

about 5 months ago

Major Scientific Journal Publisher Requires Public Access To Data

wanax RIP PLOS (136 comments)

It goes way beyond just genes and patient data. First, there's the issue of regulation. In most biology- and psychology-related fields, there's a raft of regulations from funding sources, institutional review boards, the Dept. of Agriculture (which oversees animal facilities) and IACUCs, for example, that make it impossible to comply with this requirement, and will continue to do so for a long time. No study currently being conducted using animal facilities can meet these criteria, because many records related to animal facilities (including the all-important experimental protocol) must remain confidential by statute (with an attestation of compliance from the IRB and IACUC). Likewise, in the case of any human research you'll have to get a protocol for protecting subject anonymity past the IRB, and given the likelihood of inadvertent identity disclosure that will be extremely difficult to do.

Second, there's a deep flaw in how the policy is written and how it conceives of data. To wit, the policy defines: "Data are any and all of the digital materials that are collected and analyzed in the pursuit of scientific advances."

Now for starters, there's a loophole big enough to drive several trucks through: in many experimental contexts, material necessary for a complete understanding of the 'raw data' is not in digital form, but rather in, say, lab notebooks. That leads to the broader issue: what most researchers would actually be interested in seeing publicly disclosed is the 'data set', which is not the 'raw data' but data processed into a useful, compact form suitable for statistical analysis.

However, in many experiments all of the material necessary to understand the 'raw data' (which I'll define here as the measured result of an assay, in a very general sense) is distributed between lab notebooks, digital data collection, calibration and compliance records in facilities archives, and several levels of processing, often using proprietary and very expensive software. Even if all of those things could be published (see above), the 'raw data' would be mostly worthless because of the vast amount of time and effort required in many cases to turn the 'raw data' into the 'data set'.

The third problem of course, which has been addressed in several places already on this thread is that there's no money in grants to fund the required repositories.

I think at some level this policy is a noble idea, but it's been implemented in a terrible way, and obviously written by people in fields that already have functioning, funded public databases. Either people in many fields are going to stop publishing in PLOS, or they'll drive the truck through the loopholes and it'll be just as toothless as Science and Nature's sharing requirements.

If they really wanted to push effectively for greater transparency, what they should be pushing for at the moment is simultaneous publication of the 'data set', which would let fields that don't yet have standardized databases in place design standards that would allow their creation.

about 5 months ago

Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

wanax Re:Use Class Rank (264 comments)

I should have been more specific, since indeed I'm fairly ignorant about the American college experience for many (most? I'll have to check) students. My experience in academia has been almost entirely at large research universities, with friends and family filling out my knowledge of the liberal-arts colleges and some local colleges. But the entire grade-inflation debate has been focused on colleges with competitive admissions (only about 15% or so), so I'll maintain that my experience is relevant.

about 6 months ago

Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

wanax Re:Use Class Rank (264 comments)

What you link to is one of many examples of 'classic' tests that are 'difficult' because they are not so much tests of the 'intelligence' or even 'scholastic aptitude' we currently fetishize as straight-out tests of cultural knowledge. That test would have been easy for any decently schooled person (read: sufficient family income) at the time, just as the GRE is easy today (and I doubt any student in the country in 1869 could have cracked the 85th percentile on the SAT). Most of the history of standardized testing over the last century has been a slow move away from testing cultural knowledge toward something a bit more general, but that change has been limited.

With regard to your uncle, I think it's telling that he retired recently. As was mentioned lower in the thread, one of the symptoms of teachers who are no longer engaged is that they start blaming their students for lack of understanding. Both my parents are professors, and I work at a major research university, so I suspect I have a better pool to sample from than you do. Most of what I hear is 'what great students we have' and 'who would believe an undergraduate could have written this', etc. To make a more concrete example, my mom is a professor of classics who's been teaching since the late '60s. Over the course of her career she's received about 12 papers from undergraduates of such high quality that she suggested they revise them for professional submission. Of those papers, 8 came in the past 10 years.

about 6 months ago

Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

wanax Re:Use Class Rank (264 comments)

There's a problem with comparing sports pros to college students, which is that there are a lot of effects from over-training, sunk-cost psychology and sticky liquidity in terms of skill transfer between sports. I currently work in neuroscience, where we have to be very careful in interpreting animal research due to the same issue. College students who are sophomores or juniors pay comparatively little cost in shifting into a field that's a better fit for them (and likewise there are many more cognate fields), so you wouldn't necessarily expect the same effects on the distribution.

about 6 months ago

Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

wanax Re:Use Class Rank (264 comments)

Grading on a curve only works for large, introductory courses. The problem is twofold: 1) smaller classes cannot be assumed to have a normal distribution, and 2) once you get past intro classes in any subject, there is a strong selection bias, so the people in upper-level classes all tend to be high-level performers in that subject (which also means you can't assume a normal distribution).

The big problem with grades is that they conflate course difficulty and student performance. If you want grades to be a proxy for performance, you have to weight them somehow by class difficulty (a toy sketch of one such adjustment follows below). The problem is that nobody can agree on how to rank class difficulty, due to academic politics: nobody wants to be the department that gets the short end of the stick in the difficulty rankings. In my personal experience, as one of the few people who have taken multiple graduate-level classes in three disciplines (history, mathematics and neuroscience), at that level no field is particularly easier or harder than another; it's just that the type of work one does is very different.
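Here's that toy sketch (my own illustration, and emphatically not the statistician's actual method from TFA): express each grade as a z-score within its own course, so an A in an easy-grading course and an A in a harsh-grading course stop looking identical.

<ecode>
import statistics

def course_adjusted(grades_by_course):
    """Convert raw grades to z-scores computed within each course."""
    adjusted = {}
    for course, grades in grades_by_course.items():
        mu = statistics.mean(grades.values())
        sd = statistics.pstdev(grades.values()) or 1.0   # guard against a zero-variance course
        adjusted[course] = {student: (g - mu) / sd for student, g in grades.items()}
    return adjusted

# Hypothetical grades, purely for illustration.
example = {
    "intro_bio": {"alice": 3.9, "bob": 3.7, "carol": 3.8},        # inflated grading
    "real_analysis": {"alice": 3.2, "dave": 2.4, "erin": 2.8},    # harsher grading
}
print(course_adjusted(example))
</ecode>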

The other issue that I rarely see addressed in all of the 'grade inflation' concern (and which class rank also ignores) is that maybe today's college students are actually working a lot harder than those in 1960 (perhaps due to debt, the weak economy, lack of security from getting a degree etc), and have actually earned a big chunk of the upward grade adjustment. That's certainly been my experience when compared to my own cohort, and that of quite a few professors that I talk to as well.

about 6 months ago

Ask Slashdot: DIY Computational Neuroscience?

wanax Re:Study and practice this in private. (90 comments)

To amplify the above comment, as a neuroscientist with a computational background: don't try to go it alone.

There are a few reasons for this:
1) Research in the field is done by groups because the main problem in generating an 'interesting simulation problem' is carefully defining a scope and a target. That's really hard to do, and generally involves careful discussions between people with different knowledge bases and priorities. If you can't give a clear and succinct answer to the question "How, if successful, will this research advance the field?" to somebody like Larry Abbott, you aren't working on a 'real world problem.'

2) The state of the field is generally about 2 years ahead of the published literature. Unless you have collaborators who routinely attend talks and meetings, and know what people in your area(s) of interest are doing, it's very easy to wind up on the wrong track.

3) Modeling is only useful if it leads to experimental predictions that can be tested, and so needs to be part of an ongoing collaborative interaction between people collecting data, people analyzing it, and people modeling it. Without the entire loop in place, it's difficult to make useful contributions. Also related: outside of things like gene arrays, and a few other standardized approaches, most data in the field is collected by bespoke setups, so even understanding how to parse a data set requires interaction with the people who collected it.

So to answer the original questions:
(1) There are so many that it's impossible to specify. Very little computational neuroscience these days requires more than a workstation. You need to get into a collaboration to reduce the scope of the question for it to be answerable.

(2) It's probably easier than you think, but again it requires collaboration with somebody who's in industry or academia (the latter is probably easier). There are several people I know who informally collaborate doing neural modeling or data analysis with established labs. There are plenty of researchers who welcome informal collaboration, as long as it's competent.

(3) It really depends on who you wind up collaborating with, and on the type of question. Neuron and Genesis are compartmental modelling simulators, which you'll only use if you wind up working with people on the molecular end of the spectrum (i.e. figuring out intracellular processes). Most systems-level work is done using Matlab (some Mathematica and Python as well); see the sketch after this list for the flavor of that style.

(4) Get involved with non-DIYers. Find a lab to collaborate with! Go to SFN next year, and/or ICCNS/ICANNS/CoSyne/etc (see for example: http://www.frontiersin.org/events/Computational_Neuroscience). Go to posters and talk with people. If you see something interesting, ask if they'd be interested in a collaboration, or ask them your question from (1). It'll probably take multiple attempts to find the right group, but there are a ton of groups out there.
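As promised in (3), here's a minimal leaky integrate-and-fire neuron in plain Python -- my own toy example of the systems-level flavor, not code from Neuron, Genesis or anybody's lab, with illustrative textbook-style parameter values:

<ecode>
dt, T = 0.1e-3, 0.5                       # time step and total duration (s)
tau_m = 20e-3                             # membrane time constant (s)
R = 1e8                                   # membrane resistance (ohms)
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3   # potentials (V)
I = 200e-12                               # constant injected current (A)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    v += dt * (-(v - v_rest) + R * I) / tau_m   # leaky integration toward v_rest + R*I
    if v >= v_thresh:                           # threshold crossing: spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T} s")
</ecode>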

Finally, I'd just like to emphasize that working on 'real world' problems in neuroscience (computational or not) is a time consuming endeavor. If you don't think you'll be able to devote several hundred hours a year at the least, it'll be hard for you to find tractable problems.

about 8 months ago

Ask Slashdot: Best Language To Learn For Scientific Computing?

wanax Re:Python (465 comments)

I have little idea what works for supercomputers and highly parallelized data analysis (I've never used one). I work on data sets that tend to have memory bottlenecks, which I think describes a lot of exploratory data analysis. In that setting, one major advantage I've found with Mathematica is that I can leave the data intact while creating a lot of code that accesses it in multiple forms, thanks to Mathematica's ability to process symbolic instructions before querying the data set.

As for the price of the shiny, I bought my initial Mathematica license for $500, and I've paid on average about $120/year for two licenses (work and home, 8 and 6 cores respectively). It's hardly an expense.

about 9 months ago

Black Death Predated 'Small World' Effect, Say Network Theorists

wanax Re:interesting question (168 comments)

The wide distribution of silk merely implies that there was some trade -- it doesn't rule out markets so thin that a single caravan's choice of whether or not to travel controlled the availability of new silk for years at a time. Try reading Hakluyt's voyages some time -- organizing even a single successful long-distance trading caravan was not an easy operation.

I think one thing people often forget about the great steam age of transportation is that the flows of people were bilateral, and mostly symmetric. While some residual of the passengers who left Europe for, say, the US stayed, most of them eventually came back to where they left from -- those steam ships leaving from New York were crowded too. Comparing that to the Crusades is apples to oranges: sure, quite a few people left France and the HRE for the Middle East, but nearly all of them stayed once they arrived. Only a very few top-tier nobility and traders ever intended to return home.

The difference between 'large' and 'small' world networks here is that for a small world we can make the statistical assumption that there will be interpersonal contact between people all over the world within a fairly small tau (say, 4 days). What this research shows is that that assumption wasn't met by medieval European society at the time of the Black Death -- quite likely because long-distance travel and trade were on a small enough scale that a few individuals' decisions (say, on hearing about the plague) could radically change the structural dynamics of the network for substantial periods of time.
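To make the 'small world' assumption concrete, here's a quick sketch in Python with networkx (mine, not anything from the paper): start from a ring lattice in which everyone only knows their near neighbours, rewire a small fraction of the ties at random, and the typical number of hops between any two people collapses.

<ecode>
import networkx as nx

n, k = 1000, 10
lattice = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)       # pure ring lattice, no shortcuts
small_world = nx.watts_strogatz_graph(n, k, p=0.05, seed=1)  # ~5% of ties rewired at random

print(nx.average_shortest_path_length(lattice))       # roughly n / (2k) = 50 hops
print(nx.average_shortest_path_length(small_world))   # collapses to a handful of hops
</ecode>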

about 9 months ago
