
Comments


Sand in the Brain: A Fundamental Theory To Model the Mind

wanax Re:Sand in our Brain (105 comments)

Actually, since neurons have functional homeostatic pruning and nonlinear membrane responses, there are quite large numbers of zero values when we're recording firing rates.

about two weeks ago

Sand in the Brain: A Fundamental Theory To Model the Mind

wanax Re:Sand in our Brain (105 comments)

With regard to question 2) No.
Question 1 is an ongoing field of research. Some of the work that I've found helpful in approaching the question:
-The Computational Beauty of Nature (Gary William Flake)
-Barriers and Bounds to Rationality (Peter Albin; there are free pdf copies available online).
-A New Kind of Science (Stephen Wolfram; also available free online).

about two weeks ago

Sand in the Brain: A Fundamental Theory To Model the Mind

wanax Re:Sand in our Brain (105 comments)

The linked article was horribly written. I'll give a shot at trying to explain it (or rather, a really, really simplified version).

Two of the fundamental problems that neural circuits must solve are the noise-saturation dilemma and the stability-plasticity dilemma. The first is best explained in the context of vision. Our visual system is capable of detecting contrast (i.e., edges) over a massive range of brightness, spanning a space of about 10^10. Given that neurons have limited firing rates (typically between 0 and 200 Hz), there needs to be some normalization mechanism that allows useful contrast processing over massive variations in absolute input (more on this later). The stability-plasticity dilemma is that the brain needs to be sufficiently flexible to learn from a single event (let's say, touching a hot stove is a bad idea), but memories, once learned, have to be sufficiently stable to last the rest of a creature's life span.
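One common way to deal with the noise-saturation trade-off is divisive (shunting) normalization; here's a minimal sketch of the idea in Python (the semi-saturation constant and maximum rate are illustrative assumptions of mine, not values from any particular model):

```python
# Sketch of divisive normalization: the output stays bounded no matter
# how large the absolute input gets, while relative contrast survives.

def normalized_response(inputs, semi_saturation=1.0, r_max=200.0):
    """r_i = r_max * I_i / (sigma + sum(I)); each response is bounded
    by r_max regardless of the absolute input level."""
    total = sum(inputs)
    return [r_max * i / (semi_saturation + total) for i in inputs]

# The same 2:1 contrast at vastly different absolute brightness:
dim = normalized_response([2.0, 1.0])
bright = normalized_response([2e10, 1e10])
# Both stay inside the 0-200 "firing rate" range, and the 2:1 ratio
# between the two channels is preserved in each case.
```

The edge (contrast) information is carried by the ratios between channels, which the division leaves intact even as absolute input varies over ten orders of magnitude.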

The stability-plasticity dilemma implies that neural circuits must operate in at least two (as I said, very simplified) distinct states, a "resting" or "maintenance" state, and a "learning" state, and that there is a phase-transition point in between them. Furthermore, these states need to have the following properties regarding stability:
1) the learning state must collapse into the maintenance state in the absence of input (otherwise you get epilepsy).
2) reasonable stimulation (input) during the resting state must be able to trigger a phase change into the learning state (or you become catatonic).

Many circuits/mechanisms have been proposed to explain how the brain solves these dilemmas. Most of them define a recurrent neural network using some combination of gated diffusion and oscillatory dynamics to fit the well-known oscillatory and wave-based dynamics that have been recorded in neural circuits. Some of these models employ intrinsic learning via a learning rule (i.e., self-organizing maps) while others are fit by the researcher. One key point about this class of models (as opposed to the TFA approach) is that they have a macro-circuit architecture specified by the modeler. Typically these models are at least somewhat sensitive to parametric perturbation.

TFA describes another approach, which comes out of research on cellular automata done by Ulam, von Neumann, Conway and Wolfram. This approach posits that parametric stability and macro-circuit organization are only loosely important so long as the system obeys a certain set of rules regarding local interaction (this could also be thought of as the micro-circuit), because it will self-organize to a point of 'critical stability'. In the two-state model described above, this approach predicts that neural circuits are always at a state of 'critical stability', where maintenance occurs through frequent small perturbations, or avalanches, and any new input will trigger a large avalanche, causing learning. Bak proposed this as a general model of neural circuit organization. One trademark of this type of model is 'scale free' or 'power law' behavior, where the frequency of an event falls off as a power of its size, so small avalanches are common and large ones rare. Some recent data has shown power-law dynamics in neural populations (a lot of other data doesn't show power-law dynamics).
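The sandpile picture behind this is easy to simulate. Below is a toy Bak-Tang-Wiesenfeld-style sandpile in Python (grid size, grain count, and threshold are arbitrary choices of mine) showing the avalanche behavior described: most drops do little or nothing, while a rare single grain topples a large part of the pile.

```python
import random

def drive_sandpile(width=20, height=20, grains=20000, threshold=4, seed=0):
    """Drop grains one at a time on a 2D sandpile; a cell holding
    `threshold` grains topples, sending one grain to each neighbour
    (grains fall off the edges). Returns each drop's avalanche size,
    measured as the number of topplings it triggered."""
    rng = random.Random(seed)
    grid = [[0] * width for _ in range(height)]
    sizes = []
    for _ in range(grains):
        r, c = rng.randrange(height), rng.randrange(width)
        grid[r][c] += 1
        size = 0
        unstable = [(r, c)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < threshold:
                continue
            grid[i][j] -= threshold
            size += 1
            if grid[i][j] >= threshold:   # may need to topple again
                unstable.append((i, j))
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < height and 0 <= nj < width:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= threshold:
                        unstable.append((ni, nj))
        sizes.append(size)
    return sizes

sizes = drive_sandpile()
# Once the pile self-organizes to criticality, most single grains cause
# nothing or tiny avalanches, while rare drops topple much of the grid.
```

Plotting a histogram of `sizes` on log-log axes would show the roughly straight-line (power-law) tail that is the trademark of this class of model.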

One big problem with the critical stability hypothesis is that it doesn't deal well with the noise-saturation dilemma: it needs to cause the same general size of avalanche whether it's hit by one grain of sand, or 10^10 grains of sand.

None of this is particularly new; neural avalanches (albeit in a different context) were postulated in the early '70s. Could some systems in the brain exploit self-organized criticality? Sure, but there is a lot of data out there that's inconsistent with it being the primary method of neural organization.

about two weeks ago

Illustrating the Socioeconomic Divide With iOS and Android

wanax Re:Spain loves Android (161 comments)

Having recently been in Spain (with my unlocked iPhone 4 in tow), I can tell you that the support for iPhones (at least in Barcelona) is terrible. It took trips to four different stores to find an iPhone 4-compatible prepaid SIM (if I'd had the iPhone 5, I would have been SOL and had to pay for roaming data on my US plan). None of those stores displayed iPhones prominently (although they were available, at least through Vodafone; even the 5, new, but you couldn't use a prepaid SIM in it).

I tend to think the issue is that Spain has a really fractured retail environment, both with a lot of providers (Vodafone/Movistar/Orange/Yoigo and lots of third-party options) and with a lot of kiosk-type stores. Vodafone has its own retail outlets, but most of the others seemed to be based in malls, and the malls in turn seemed to have one 'basket' of stores, depending on who owns the mall. During my search for a SIM, for example, I was sent on a goose chase from store to store with directions that turned out to be pretty approximate (wrong address, but within about 300 meters of the correct address).

Given that retail environment, I think it's pretty natural that Android, with its myriad of slightly customized, provider-branded phones, fares a lot better than iOS at the moment... People want something that can be supported by their local mall/kiosk.

about two weeks ago

Supreme Court Ruling Relaxes Warrant Requirements For Home Searches

wanax O/T but.. (500 comments)

What was the different solution? (I've also wrecked quite a few shirts in my time)

about 2 months ago

Major Scientific Journal Publisher Requires Public Access To Data

wanax RIP PLOS (136 comments)

It goes way beyond just genes and patient data. First, there's the issue of regulation. In most biology/psychology-related fields, there's a raft of regulations from funding sources, institutional review boards (IRBs), the Dept. of Agriculture (which oversees animal facilities), and IACUCs, for example, that makes it impossible to comply with this requirement, and will continue to do so for a long time. No study currently being conducted using animal facilities can meet this criterion, because many records related to animal facilities (including the all-important experimental protocol) must remain confidential by statute (with the attestation of compliance from the IRB and IACUC). Likewise, in the case of any human research, you'll have to get a protocol past the IRB for protecting subject anonymity, and given the likelihood of inadvertent identity disclosure, that will be extremely difficult to do.

Second, there's a deep flaw in how the policy is written and how it conceives of data. To wit, the policy defines: "Data are any and all of the digital materials that are collected and analyzed in the pursuit of scientific advances."

Now for starters, there's a loophole big enough to drive several trucks through: in many experimental contexts, material necessary for a complete understanding of the 'raw data' is not in digital form, but rather in, say, lab notebooks. Which leads to the broader issue: what most researchers would actually be interested in seeing publicly disclosed is the 'data set', which is not 'raw data', but data that's been processed into a useful, compact form suitable for statistical analysis.

However, in many experiments all of the material necessary to understand the 'raw data' (which I'll define here as the measured result of an assay, in a very general sense) is distributed between lab notebooks, digital data collection, calibration and compliance records in facilities archives, and several levels of processing, often using proprietary and very expensive software. Even if all of those things could be published (see above), the 'raw data' would be mostly worthless because of the vast amount of time and effort required in many cases to turn the 'raw data' into the 'data set'.

The third problem, of course, which has already been addressed in several places on this thread, is that there's no money in grants to fund the required repositories.

I think at some level this policy is a noble idea, but it's been implemented in a terrible way, and it was obviously written by people in fields that already have functioning, funded public databases. Either people in many fields will stop publishing in PLOS, or they'll drive the truck through the loopholes and it'll be just as toothless as Science's and Nature's sharing requirements.

If they really wanted to push effectively for greater transparency, what they should be pushing at the moment is simultaneous publication of the 'data set', which would let fields that don't have standardized databases in place design standards that would allow their creation.

about 2 months ago

Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

wanax Re:Use Class Rank (264 comments)

I should have been more specific, since indeed I'm fairly ignorant about the American college experience for many (most? I'll have to check) students. My experience in academia has been nearly entirely in large research universities, with friends and family filling out my knowledge of the liberal-arts colleges and some local colleges. But the entire grade-inflation debate has been focused on colleges that have competitive admission (only about 15% or so), so I'll maintain that my experience is relevant.

about 2 months ago

Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

wanax Re:Use Class Rank (264 comments)

What you link to is one of many examples of 'classic' tests that are 'difficult' because they are not so much tests of the 'intelligence' or 'scholastic aptitude' that we currently fetishize as straight-up tests of cultural knowledge. That test would have been easy for any decently schooled person (read: sufficient family income) at the time, just like the GRE is easy today (and I doubt any student in the country in 1869 could have cracked the 85th percentile on the SAT). Most of the history of standardized testing in the last century has been a slow attempt to move away from testing cultural knowledge toward something a bit more general, but that change has been limited.

With regard to your uncle, I think it's telling that he retired recently. As was mentioned lower in the thread, one of the symptoms of teachers who are no longer engaged is that they start blaming their students for lack of understanding. Both my parents are professors, and I work at a major research university, so I suspect I have a better pool to sample than you. Most of what I hear is 'what great students we have' and 'who could believe that an undergraduate could have written this', etc. Or to make a more concrete example: my mom is a professor of classics who has been teaching since the late '60s. She has received about 12 papers from undergraduates over the course of her career of such high quality that she suggested they revise them for professional submission. Of those papers, 8 have been submitted in the past 10 years.

about 2 months ago

Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

wanax Re:Use Class Rank (264 comments)

There's a problem comparing sports pros to college students, which is that there are a lot of effects of over-training, sunk-cost psychology, and sticky liquidity in terms of skill transfer between sports. I currently work in neuroscience, where we have to be very careful in interpreting animal research due to the same issue. College students who are sophomores or juniors face comparatively little cost shifting into a field that's a better fit for them (and likewise there are many more cognate fields), so you wouldn't necessarily expect the same effects on the distribution.

about 2 months ago

Adjusting GPAs: A Statistician's Effort To Tackle Grade Inflation

wanax Re:Use Class Rank (264 comments)

Grading on a curve only works for large, introductory courses. The problem is twofold: 1) smaller classes cannot be assumed to have a normal distribution, and 2) once you get past intro classes in any subject, there is a strong selection bias, so the people in upper-level classes all tend to be high-level performers in that subject (which also means you can't assume a normal distribution).
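A quick sketch of why (a toy example of mine, not from the comment): if 'curving' means converting raw scores to z-scores, the same raw score earns a very different curved grade depending on who else happens to be in the class.

```python
from statistics import mean, pstdev

def curve(scores):
    """Grade on a curve: convert raw scores to z-scores."""
    mu, sigma = mean(scores), pstdev(scores)
    return [(s - mu) / sigma for s in scores]

# A large intro class with a wide spread of ability:
intro_class = [55, 60, 65, 70, 75, 80, 85, 90, 95, 50] * 20
# A small upper-level seminar of self-selected strong students:
seminar = [85, 88, 90, 92, 95]

# The same raw score of 85 in each setting:
z_intro = curve(intro_class + [85])[-1]
z_seminar = curve(seminar)[0]
# In the intro class an 85 curves well above the mean; in the selective
# seminar the identical 85 lands at the bottom of the distribution.
```

This is the selection-bias point in miniature: the curve measures you against your classmates, not against the material, so it breaks down exactly where classes are small and self-selected.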

The big problem with grades is that they conflate course difficulty and student performance. If you want grades to be a proxy for performance, you have to weight them somehow or other by class difficulty. The problem is nobody can agree on how to rank class difficulty due to academic politics, since nobody wants to be the department that gets the short end of the stick in the difficulty rankings. In my personal experience as one of the few people who have taken multiple graduate-level classes in three disciplines (history, mathematics, and neuroscience), at that level no field is particularly easier or harder than another; it's just that the type of work one does is very different.

The other issue that I rarely see addressed in all of the 'grade inflation' concern (and which class rank also ignores) is that maybe today's college students are actually working a lot harder than those in 1960 (perhaps due to debt, the weak economy, lack of security from getting a degree etc), and have actually earned a big chunk of the upward grade adjustment. That's certainly been my experience when compared to my own cohort, and that of quite a few professors that I talk to as well.

about 2 months ago

Ask Slashdot: DIY Computational Neuroscience?

wanax Re:Study and practice this in private. (90 comments)

To amplify the above comment, as a neuroscientist with a computational background: don't try to go it alone.

There are a few reasons for this:
1) Research in the field is done by groups because the main problem in generating an 'interesting simulation problem' is carefully defining a scope and a target. That's really hard to do, and generally involves careful discussions between people with different knowledge bases and priorities. If you can't give a clear and succinct answer to the question "How, if successful, will this research advance the field?" to somebody like Larry Abbott, you aren't working on a 'real world problem.'

2) The state of the field is generally about 2 years ahead of the published literature. Unless you have collaborators who routinely attend talks and meetings, and know what people in your area(s) of interest are doing, it's very easy to wind up on the wrong track.

3) Modeling is only useful if it leads to experimental predictions that can be tested, and so needs to be part of an ongoing collaborative interaction between people collecting data, people analyzing it, and people modeling it. Without the entire loop in place, it's difficult to make useful contributions. Also related: outside of things like gene arrays, and a few other standardized approaches, most data in the field is collected by bespoke setups, so even understanding how to parse a data set requires interaction with the people who collected it.

So to answer the original questions:
(1) There are so many that it's impossible to specify. Very little computational neuroscience these days requires more than a workstation. You need to get into a collaboration to reduce the scope of the question for it to be answerable.

(2) It's probably easier than you think, but again it requires collaboration with somebody who's in industry or academia (the latter is probably easier). There are several people I know who informally collaborate doing neural modeling or data analysis with established labs. There are plenty of researchers who welcome informal collaboration, as long as it's competent.

(3) It really depends on who you wind up collaborating with, and the type of question. Neuron and Genesis are compartmental modeling simulators, which you'll only use if you wind up working with people on the molecular end of the spectrum (i.e., figuring out intracellular processes). Most systems-level work is done using Matlab (some Mathematica and Python as well).

(4) Get involved with non-DIYers. Find a lab to collaborate with! Go to SFN next year, and/or ICCNS/ICANNS/CoSyne/etc (see for example: http://www.frontiersin.org/events/Computational_Neuroscience). Go to posters and talk with people. If you see something interesting, ask if they'd be interested in collaboration.. or ask them your question (1). It'll probably take multiple attempts to find the right group, but there are a ton of groups out there.
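For a sense of what minimal systems-level modeling code in Matlab or Python (point 3 above) looks like, here's a leaky integrate-and-fire neuron sketch in Python; the parameter values are textbook-style illustrations of mine, not from the comment.

```python
# Minimal leaky integrate-and-fire neuron, Euler-integrated.
# Units: ms for time, mV for voltage; parameters are illustrative.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Integrate dV/dt = (-(V - v_rest) + I) / tau; emit a spike and
    reset whenever V crosses threshold. Returns spike times in ms."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 500 ms of constant drive strong enough to push V past threshold:
spikes = simulate_lif([20.0] * 5000)
# The neuron settles into regular firing at a rate set by the drive.
```

Real research code differs mainly in scale (populations, synapses, fitting to data), not in kind; this is the sort of thing that stays comfortably within a workstation's capacity, as noted in (1).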

Finally, I'd just like to emphasize that working on 'real world' problems in neuroscience (computational or not) is a time consuming endeavor. If you don't think you'll be able to devote several hundred hours a year at the least, it'll be hard for you to find tractable problems.

about 5 months ago

Ask Slashdot: Best Language To Learn For Scientific Computing?

wanax Re:Python (465 comments)

I have little idea what works for supercomputers and highly parallelized data analysis (I've never used one). I work on data sets that tend to have memory bottlenecks, which I think describes a lot of exploratory data analysis activity. In that framework, I've found one major advantage of Mathematica: I can leave the data intact while creating a lot of code that accesses it in multiple forms, thanks to Mathematica's ability to process the symbolic instructions before querying the data set.
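Python doesn't reproduce Mathematica's symbolic evaluation, but the leave-the-data-intact pattern has a rough analogue in NumPy views, where many 'forms' of a data set share one underlying buffer (my analogy, not the commenter's):

```python
import numpy as np

# One large array lives in memory exactly once...
recording = np.arange(24, dtype=np.float64)

# ...while several "forms" of it are views onto the same buffer:
trials = recording.reshape(4, 6)   # trial x sample layout, no copy
channel = trials[:, 2]             # one column, still no copy
evens = recording[::2]             # strided slice, still no copy

assert np.shares_memory(trials, recording)
assert np.shares_memory(channel, recording)
assert np.shares_memory(evens, recording)

# Writing through a view updates the single underlying data set:
channel[:] = 0.0
```

This is weaker than symbolic rewriting before querying, but it captures the memory-bottleneck benefit: reshaping and slicing cost nothing until you actually touch the data.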

In terms of the price of the shiny, I bought my initial license for Mathematica for $500, and I've paid on average about $120/year since for two licenses (work and home, 8- and 6-core respectively). It's hardly an expense.

about 6 months ago

Black Death Predated 'Small World' Effect, Say Network Theorists

wanax Re:interesting question (168 comments)

The wide distribution of silk merely implies that there was some trade; it doesn't rule out markets so thin that a single caravan's choice of whether or not to travel controlled the availability of new silk for a year or more at a time. Try reading Hakluyt's voyages some time: organizing even a single successful long-distance trading caravan was not an easy operation.

I think one thing that people often forget about the great steam age of transportation is that the flows of people were bilateral, and mostly symmetric. While some residual of the passengers who left Europe for, say, the US stayed, mostly they eventually came back to where they left from; those steamships leaving from New York were crowded. Comparing that to the Crusades is apples to oranges: sure, quite a few people left France and the HRE for the Middle East, but nearly all of them stayed once they arrived. Only a very few top-tier nobility and traders ever intended to return to their homes.

The difference between 'large' and 'small' world networks here is that for a small world, we can make the statistical assumption that there will be interpersonal contact between people all over the world within a fairly small tau (say, 4 days). What this research shows is that that assumption wasn't met by medieval European society at the time of the Black Death, quite likely because long-distance travel and trade were on a sufficiently small scale that a few individuals' decisions (say, on hearing about the plague) could radically change the structural dynamics of the network for substantial periods of time.
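The small-world distinction can be made concrete with a toy network experiment (entirely my own sketch): on a plain ring lattice the average number of hops between people is large, while a handful of random long-range links, the analogue of the caravans, collapses it.

```python
import random
from collections import deque

def avg_path_length(n, adj):
    """Mean shortest-path length over all ordered pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k=2):
    """Each node linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            adj[i].add((i + step) % n)
            adj[(i + step) % n].add(i)
    return adj

n = 200
lattice = ring_lattice(n)
rewired = ring_lattice(n)
rng = random.Random(1)
for _ in range(20):  # a handful of random long-range shortcuts
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b:
        rewired[a].add(b)
        rewired[b].add(a)

long_world = avg_path_length(n, lattice)    # local contact only
small_world = avg_path_length(n, rewired)   # plus a few shortcuts
# A few long-range links sharply reduce the average path length;
# remove them (a caravan stays home) and the world gets "large" again.
```

The fragility cuts both ways: because so few links do all the work, a few individual decisions really can flip the network between regimes, which is the point of the comment above.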

about 5 months ago

Ask Slashdot: Best Language To Learn For Scientific Computing?

wanax Re:Python (465 comments)

Sage is okay for small-to-midsize projects, as is R (both benefit from being free). On the whole, though, I'd really recommend Mathematica, which is purpose-built for that type of project, makes it trivial to parallelize code, is a functional language (once you learn it, I doubt you'll want to go back), and scales well up to fairly large data sets (tens of gigs).

about 6 months ago

Somebody Stole 7 Milliseconds From the Federal Reserve

wanax Re:This is not insider trading! (740 comments)

Indeed, if it's criminal, it'll be wire fraud... and that's the big IF here, since I don't know whether the Fed's embargoes are criminal to breach... But if a reporter releases embargoed information before the agreed time, and you as a trader should know that the information is embargoed (you did get a license, right?), by trading ahead of release you and the reporter have likely engaged in a conspiracy to commit wire-fraud, which is actually a much easier deal to prove than insider trading.

about 7 months ago

Team Oracle Penalized For America's Cup Rules Violations

wanax Re:Wow (190 comments)

The boats are incredible, but it's not sailing in any accessible aspect. I love sailing Sunfish on Lake Morey, or bigger boats on Lake Champlain (and I know enough about my skill level to avoid wider waters like the Sound). But what they're doing now is so totally foreign to everybody who's ever sailed a boat... I've watched a few of the 'challenger races' and I could scarcely tell what direction the wind was coming from, due to the airfoils (they have to both tack coming upwind and jibe going downwind), except for the speed of the boat (~25 kn upwind, ~40 kn downwind). This was, as far as I can tell from the PR material, meant to make the race more exciting, but since it dropped the number of teams down to 4, there was never any mystery that Team New Zealand would challenge Oracle, since so few people could afford to build the boats and spend the time racing them (even Oracle has cosponsors)... I absolutely agree, though, that the announcers and TV coverage have been phenomenal.

about 7 months ago

Remember the Computer Science Past Or Be Condemned To Repeat It?

wanax Another interpretation (479 comments)

I've always felt like that quotation had another interpretation, one that's much more favorable to the MPs:

If you're an MP, you've probably had to deal with a lot of people asking for money to fund what is essentially snake oil. If you don't understand the underlying 'cutting edge' technology (both plausible and acceptable), one simple test is to ask a question where you KNOW that if the answer is anything other than 'No', the person is bullshitting and you can safely ignore them... and as reported, the question is phrased in such a way that it would sorely tempt any huckster to oversell his device. I think Babbage's lack of comprehension was due to his inability to understand that the MP was questioning HIM, rather than the device.

about 9 months ago

Ask Slashdot: What Is Your Favorite Monitor For Programming?

wanax Re:27" FTW (375 comments)

I used to agree with you, and was religious about getting dual 24" 1920x1200s for my setups (usually Acer). However, last time I upgraded my home machine I finally decided to bite the bullet and shell out the 1k for a 2560x1600 30" (in my case, a DoubleSight DS-309W).. and I could not be happier. The difference in vertical screen space is surprisingly noticeable, and it just about fills my useful-field-of-view at about 22-24" viewing distance, so I don't find myself having to turn my head very much. I have a 27" 2560x1440 on the other wing of my L-desk (hooked up to my laptop while at home) and frankly I've been looking for an excuse to replace it with a 30" the last few months.

One other thing to keep in mind about large displays, is that they need to be mounted at the correct height to be comfortable: when you're sitting in a relaxed posture looking straight ahead, the center of the display should be at eye-level. That's about 4-7" higher for most people than the included stand on a normal height desk. Either get a wall mount/better stand, or make sure you have a few hefty books to put it on (mine is currently mounted on an old Principles of Neuroscience and A New Kind of Science, which I find to be perfectly sturdy).

about a year ago

Krugman: Is the Computer Revolution Coming To a Close?

wanax Re:I would argue (540 comments)

Remember the "and" part. Yes, we have abundant energy, but it's not cheap. My ability to get computations per dollar has increased by many orders of magnitude in the last 30 years (or 60, but I'm not that old), to the point that my smartphone would have been the fastest computer in the world when I was born. Energy, on the other hand, is within an order of magnitude of the same cost: the real price of coal is about the same as in 1800 (see: http://econbus.mines.edu/working-papers/wp201210.pdf, and that's with the costs of climate change externalized). That may have counted as cheap AND abundant then, but it certainly doesn't now.

about a year ago

NY Attorney General Subpoenas Craigslist For Post-Sandy Price Gougers

wanax Re:The concern is liquidity (458 comments)

Would you then say that England during WWI or WWII took the wrong action by taking those three steps? The situation is not very different: the affected area is wealthy enough to pay for the resources under normal market conditions, there's a shortage of supply, and the infrastructure required to alleviate the situation isn't adequate to meet the demand.

Markets are not magic, they work extremely well in the appropriate domain -- but once you get into a supply or demand shock, core assumptions of market pricing go out the window, whether it be liquidity, or that the real rate of return on investing is greater than holding, respectively.

about a year and a half ago
