
# Slashdot: News for Nerds


### Does Relying On an IDE Make You a Bad Programmer?

I agree with most of this, but I still use Emacs a lot.

Sometimes an IDE can become an impediment if it keeps you from looking at hard-to-find code, makes you in any way lazy about debugging, or helps you read through code too quickly (making happy assumptions). I'm currently working on an Android system and regularly have to jump between application code, Android internal services and components, various JNI or hardware abstraction layers, and kernel code. Sometimes I need to read through standard C library implementations too. So while I like doing some things in Eclipse, most of the time Eclipse is not configured to let me dig deeply enough or follow calls all the way through the framework. Similarly, the interactive debugging utilities are often limited, and I resort to reading code and adding print/logging statements. It feels old-fashioned sometimes, but on the other hand I have no excuses or reluctance to work hard enough to get a deep understanding of the whole system.

So I'd judge an IDE by its ability to help you quickly navigate a large and maybe unfamiliar code base. A good programmer's #1 job is probably reading code carefully and accurately. Features like auto-completion and live error checking are much less important. Also keep tabs on the amount of time it takes to keep your IDE configured correctly and able to compile independently of a more automated build system. Don't let it become a timesink, crutch, or excuse!


### Can Electric Current Make People Better At Math?

Math concepts unrelated to computation skill (112 comments)

I read "...and improve their understanding of math concepts" with a lot of skepticism. I think that schools love to teach computation skills because they are easy to teach and because success there is very easy to measure. But this skill is relatively unimportant compared with what I would consider "math concepts": How you apply mathematical abstractions to real-world situations (beyond making correct change at a cash register). How you break down a hard problem into less-hard pieces. How to visualize quantitative relationships, develop and use algebraic systems, and so on. These are rarely taught in schools because they are relatively difficult to teach and difficult to measure gains. So computation skills are taught instead, regardless of the fact that cheap computers are billions or trillions of times faster than any human.

Can electric current help with this kind of conceptual learning? If so, it would have applications to nearly all kinds of education, not just math.


### How Good Are Charter Schools For the Public School System?

I think one of the major problems with standardized testing is that the scores are over-used. Scores are used to evaluate students, teachers, and schools, and each group has incentives attached: scholarships and admissions for students, salaries and job opportunities for teachers, and higher funding levels for high-performing schools. If these incentive systems were somehow decoupled from the scores, then meaningful analyses of the scores could be made.

I agree that assessment is important, but it's a very difficult problem that we have not solved. Is it better to use a broken approach than to have no assessment at all? I personally think that testimonials and self-evaluations are currently more meaningful than test scores in general. Standardized tests usually measure only those outcomes that are easy to teach and test but unimportant in real life.

Mathematics is an easy example of this. My 6th-grade daughter gets homework sheets that are mostly repetitive exercises to boost speed. A recent example was computing the circumferences and areas of circles. Calculators were allowed in this case, so it was an exercise in key punching. Then we went to a pizzeria a few days later and she was happy to compute the circumference of a pizza (without prompting -- her idea). So I asked her, "About how many bites would it take to eat a whole pizza?" She didn't know how to work that out, thinking she had not been taught how to solve that kind of question. I wasn't giving her one of the needed inputs (the size of a bite), and her understanding of area was also not strong.

The problem here is fairly universal: math students are taught the abstract computation skills but not the synthesis of this knowledge. It is very easy to test how quickly students can compute things, and even how fast they can look up formulas matching the given knowns to the required unknown, but these are meaningless skills without also learning how to bridge from the abstract to the real, and how to form connections between the principles they learn. Students should be given harder problems that they have to puzzle over, even starting in grade school. It's okay if they don't get the right answer or get stumped, as long as they begin to think productively about the problem. This builds genuinely useful math skills and hopefully a certain discipline of thought that is not usually innate. The ability to apply math to real-world situations, and through algebra to treat math as a language, is much harder to assess than timed multiplication tests, and so it is generally avoided. If a teacher were to spend time on this kind of development in place of the repetitive speed drills, their students would perform worse on standardized tests. The teacher's students would suffer, as would the teacher and their whole school to some degree.
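For what it's worth, here's the estimate I was fishing for, written out. The specific numbers are invented for illustration -- say a 16-inch pizza (8-inch radius) and a bite of roughly one square inch:

```latex
\text{bites} \approx \frac{\text{area of pizza}}{\text{area of a bite}}
             = \frac{\pi r^2}{A_{\text{bite}}}
             \approx \frac{3.14 \times (8\,\text{in})^2}{1\,\text{in}^2}
             \approx 200
```

The arithmetic is grade-school stuff; the valuable step is realizing you have to estimate the missing input yourself.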

So, is standardized testing better than nothing? I think that yes, it is, but on the other hand almost any other form of assessment would be better still even if it were not standardized.


### Simulations Back Up Theory That Universe Is a Hologram

Re:A projection of what? (433 comments)

Mathematics, especially in this context, is just a language for expressing ideas. So I think that in some sense it is possible that some particular string theory really does describe what's going on. I don't know how likely that is, but the idea of a "final theory" is that it is, in some sense, a complete and accurate description of our universe's mechanics. (The holographic principle is nice because it shows that two seemingly very different theories can actually be equivalent.)

I think there are two main caveats here. One is that you never really know when you're "done" and have a theory that is indeed final. We know we are not done today because of inconsistencies, but science also lacks any capability of perfect validation. The other is that a microscopic, reductionist description of physics is not useful, or even the correct language, for describing more macroscopic effects, since basically "more is different." Chemists don't use quantum field theory because it's just not helpful.

And while it's great (even important) to consider philosophical ramifications of theoretical work like this, we have to remember that it's all still conjecture and it will probably always be conjecture. The philosophical spin-offs, so to speak, should never be taken as a way of either supporting or condemning the theory.

I think I'm basically summarizing some of what Weinberg describes in his book *Dreams of a Final Theory* (1993). That seems old now, but I highly recommend it!


### Physicists Plan to Build a Bigger LHC

Re:WHY NOT IN THE FIRST PLACE !! (263 comments)

That depends on what you mean by "tangible benefits." One argument I've heard for practical, what's-in-it-for-me-today benefits is that the technology produces spin-offs such as techniques to mass-produce rare-earth magnets, the World Wide Web, etc. But that's honestly a weak argument, because there's a lot of other research with similar chances of producing spin-off tech.

For particle physics, the feeling is that we are on the verge of some kind of revolution! Admittedly it's been that way for a few decades now, but the current working theory (the Standard Model) has a number of deep problems (thanks, Wikipedia!). Most new theories, and there are a whole lot of them, predict new phenomena just at the edge of our experimental reach. Part of that is because well-meaning theorists prefer to propose theories that are testable now or soon. But part of it is because the experimental frontier has advanced to energies around the electroweak unification scale, and, broadly speaking, lots of theories predict interesting behavior in this regime.

So it's not just a more-is-better kind of effort that won't stop until we build solar-system-sized accelerators. There really is a sense that a major shift, possibly even a philosophically challenging development, is nearly within our grasp, within our lifetimes. This is not a "practical" argument for basic science, but only history can tell us what has had short- and long-term practical benefits. History does tell us that this sort of pursuit has been enormously beneficial in the past. Maybe we are in a whole new era where new physics will be completely impractical, but that would honestly be surprising if true.


### Physicists Plan to Build a Bigger LHC

You hit numerical problems if you calculate it that way. Wikipedia gives a series expansion that works well for large values of gamma:

v (in units of c) ≈ 1 - 1/(2γ²)

v = c (1 - 1.8e-10), or 0.99999999982 c
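As a sketch of why the direct formula loses precision, here's both routes in double-precision Python. The beam energy is my assumption (50 TeV per proton, the scale discussed for a bigger ring), picked because it reproduces the 1.8e-10 quoted above:

```python
import math

m_p = 0.938272      # proton rest mass, GeV
E = 50_000.0        # assumed beam energy, GeV (50 TeV per proton)
gamma = E / m_p     # Lorentz factor, ~5.3e4

# Direct route: v/c = sqrt(1 - 1/gamma^2). The subtraction 1 - v/c
# then cancels roughly 10 of the ~16 significant digits a double has.
v = math.sqrt(1.0 - gamma**-2)
print(f"direct: 1 - v/c = {1.0 - v:.6e}")

# Series expansion: compute the tiny deficit directly, at full precision.
print(f"series: 1 - v/c = {1.0 / (2.0 * gamma**2):.6e}")   # ~1.8e-10
```

At still higher gammas (electrons, say) the direct route collapses entirely once 1/γ² drops below machine epsilon.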


### Physicists Plan to Build a Bigger LHC

I don't think in these cases you have multiple labs bidding for the job. You have multiple countries wanting to host the lab, but that's a different story.

The biggest problem for high-energy physics is establishing multi-year funding. The US government cannot promise anything beyond a single year of funding. If, say, $8B has been spent over 10 years and one year Congress says "but I promised to cut spending," then that's the end of the road for that lab. This happened to the SSC in 1993, but it has also happened many times since then to lower-profile, $500M-scale experiments that were, yes, already under construction.

Now say 15 years later the $10B has been spent, but it's not quite done: another $2B would let you finish the project. Do you really throw away $10B to save $2B? There is no fraud here, just a mis-estimation of the costs of building a beyond-state-of-the-art machine and slightly larger technical problems than were expected.

Most of the cases I'm familiar with, including the SSC, were not actually budget overruns even though they were politicized that way. If you're a politician who wants to (a) publicly demonstrate how fiscally conservative you are and (b) not actually cut spending on items that might affect the bulk of your constituency, then you cut big science every time. Even if the budget grows on the whole, you've made a statement and some headlines.


### Physicists Plan to Build a Bigger LHC

Re:WHY NOT IN THE FIRST PLACE !! (263 comments)

I also wanted to mention the failed SSC in Texas, cancelled in 1993. It would have run at nearly three times the LHC's design energy, about a decade earlier. In 1993, congressional seats were being won on promises of budget cuts, and Big Science had a large target painted on its back. Killing the SSC was a high-profile way of appearing to reduce spending while not damaging anything that many people understood or cared about.

Since that time, the US has proved time and time again that it is incapable of sustaining funding for a long-term science project. All of the high-energy accelerators in the US have been shut down, and almost no proposal in the past 20 years or so has survived all the way to producing results before getting scrapped by some budget shortfall in a particular fiscal year. The LHC survives because the US is not such a major (or critical) contributor.


### Physicists Plan to Build a Bigger LHC

Re:WHY NOT IN THE FIRST PLACE !! (263 comments)

Well, many of these tunnels, including the one the LHC uses, have been refurbished multiple times already. CERN's main ring was built to be somewhat future-proof, but that was a long time ago. A Google search turned up The History of CERN, which dates the groundbreaking to 1954.

In accelerators you have two basic designs: linear and circular(ish). In linear accelerators, each boosting element (RF cavity or whatnot) gets one chance to give the beam particles a kick, so the energy is limited by how hard you can kick (technology) and by how many elements / how long the machine is (budget).

In circular accelerators you are limited by synchrotron radiation: at some point the energy pumped into the beam merely matches the energy lost as synchrotron radiation. To move in a circle you have to accelerate inward, and an accelerating charged particle radiates light; at particle-accelerator energies this radiation is in the X-ray spectrum. You can reduce the loss by using a larger ring -- smaller curvature requires less centripetal acceleration and hence less radiation loss. You can also, of course, build stronger boosting elements, but the radiation heats the beamline and the surrounding superconducting magnets, so it's not "that simple."

The other thing to vary is the kind of particle accelerated. Electrons have a very small mass and lose a far larger fraction of their momentum to synchrotron radiation; SLAC's machine is a linear accelerator using electrons, while KEK's B factory and Cornell's CESR run electrons in rings at lower energies. Protons are the other obvious choice, which is what Fermilab accelerated and what CERN's ring accelerates now (after the upgrade from LEP to the LHC). Being much more massive, protons slough off far less of their momentum to synchrotron radiation and can reach higher energies in a ring of the same size. The disadvantage of protons is that each proton's energy is shared among its three quarks (and gluons, I think), whereas the electron is truly elementary as far as anyone can tell.
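To put a number on why mass matters so much: the energy radiated per turn scales like γ⁴/ρ, i.e. (E/m)⁴ at fixed ring radius, so at the same beam energy the mass ratio enters to the fourth power. A minimal sketch, using only the standard particle masses:

```python
# Synchrotron loss per turn scales as gamma^4 / rho = (E/m)^4 / rho,
# so for two species at the same energy in the same ring, the loss
# ratio is just the inverse mass ratio raised to the fourth power.
m_e = 0.000511   # electron rest mass, GeV
m_p = 0.938272   # proton rest mass, GeV

ratio = (m_p / m_e) ** 4
print(f"an electron radiates ~{ratio:.1e} times more per turn than a proton")  # ~1.1e13
```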

I've been out of touch lately, but as of at least 8 years ago three proposals were being discussed:

- VLHC -- a big ring accelerating protons.
- Next Linear Collider (NLC) -- a long linear accelerator for electrons.
- Muon collider -- a smaller ring (actually with straight sections, like a track-and-field track) that produces and accelerates muons.

Muons are just like electrons, only about 200 times more massive, and they are unstable, with a mean lifetime of about 2.2 microseconds. The muon collider was thought to be an ideal Higgs factory, but one with a lot of design challenges. One of the main challenges is not only to accelerate the muons before they decay, but also to collimate, or "cool", the beam very quickly so that you can create as many head-on collisions as possible.
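To see the time pressure, here's a quick back-of-the-envelope using time dilation; the gamma values are arbitrary illustrations:

```python
# In the lab frame a muon's mean lifetime is stretched by gamma,
# which is the only reason a muon collider is thinkable at all.
tau = 2.2e-6    # muon mean lifetime at rest, seconds
c = 3.0e8       # speed of light, m/s

for gamma in (10, 100, 1000):
    # Mean distance traveled before decay (v ~ c for large gamma).
    print(f"gamma={gamma:5d}: ~{gamma * tau * c / 1000:.0f} km before decay")
```

Everything -- production, cooling, acceleration -- has to fit inside those few stretched microseconds.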

So the news that the VLHC design is currently in favor is interesting, but this is hardly the first time the question has been discussed, and I doubt it will be the last. Several years ago the NLC design seemed most favorable, but a linear machine is, by its length, limited to a specific design energy and would probably be built to produce Higgs, Higgs, and more Higgs. It seems to me that a VLHC would have more discovery potential: more massive Higgs particles, signs of supersymmetry, or whatever else might exist.


### Head of Silk Road 2.0 Says It Will Be Back In Minutes If Shut Down

I hear there's petrol in Krokodil. That's pretty addictive and dangerous.


### Head of Silk Road 2.0 Says It Will Be Back In Minutes If Shut Down

Are you referring to nicotine and alcohol? I don't believe these are either the most addictive or the most dangerous on a per-use basis. You may be right if you count the aggregate effect (the total number of people addicted to smoking or drinking), but that makes a good argument against legalization.


### Head of Silk Road 2.0 Says It Will Be Back In Minutes If Shut Down

It's hard to use absolute arguments on this kind of subject. Take your argument to one extreme and you get something like "The problem isn't having easily-available nuclear bombs, the problem is those people who would use them." That statement goes too far in at least one way: while drug use causes some collateral damage, so to speak, it's not nearly as much as nukes.

Another problem with legalized (free-market) drugs is that many are physically addictive, so users quickly become unable to make free choices about their use. Marijuana is less addictive than most, which leaves room for debate on legalization. (I voted "yes" on legalization in CO, though I'm willing to admit that may have been a mistake. I'm not sure yet, but it makes an interesting experiment in any case.)

I felt like some of DPR's arguments supporting his website were delusional. Particularly this: "Let us assume you have a son who is in his teenage years and you knew they were going to do drugs, what as a parent, would you do? Would you let them go to their friends’ friends’ dealer or would you help them buy from Silk Road ..." As a parent, I would not consider either option for an instant. As soon as I became aware of my teenage (or younger, or older) child's drug use, I would have a very difficult responsibility, and it is not that of an enabler or a bystander.


### Does Software Need a Siskel and Ebert?

Re:Won't work with FOSS. (169 comments)

You also bring up a good point: an accurate review of any complex piece of software requires an unreasonably large time commitment. What's the learning curve like? (This beats "intuitiveness.") How often are updates buggy, or how often do they force re-learning on users? (Beats bugs-I-just-found reports.) How helpful is the community when it comes to resolving problems? What has the history of security flaws been like? How would you estimate the software's long-term viability and adoption by others? Does an experienced user find common tasks easy and fast, and unusual tasks not too difficult? (Beats feature-list comparisons.)

To review a movie you need (1) a decent understanding of ... I don't know, maybe "filmography". Then you need to (2) watch the movie. After a couple of hours you can write about it. None of the important questions about software can be answered after a couple of hours' exposure. Combine this with the sheer quantity of software out there and you can see why software reviews aren't as prolific or celebrated as one might hope.


### Cable Lobbyist Tom Wheeler Confirmed As New FCC Chief

This claim (regulatory capture) would be possible to argue against if internet access in the US were, without subsidies, cheaper than or at least as cheap as it is in other countries. We know that's not true, therefore we have a market (and government) failure. Case closed.


### Toyota's Killer Firmware

FTA: "Vehicle tests confirmed that one particular dead task would result in loss of throttle control, and that the driver might have to fully remove their foot from the brake during an unintended acceleration event before being able to end the unwanted acceleration."


### Toyota's Killer Firmware

I was reading through the comments hoping to find some general opinion on whether Barr's findings could have applied to practically any software stack. You usually don't have to read very far through a codebase before you spot a bug or two, but in my experience most of these bugs are never (or rarely) exposed because they lie in corner cases. For Toyota's electronic throttle control system, though, you'd have higher expectations.

It sure sounds like Barr's group indeed found code of "unreasonable quality." I'm just not sure how to put that into proper context. One can always spend more time and money on code analysis and robustness improvements. Did Toyota really fall short of reasonable expectations? It sounds to me like they did, but I'm only hearing one side of the argument.


### First Experimental Evidence That Time Is an Emergent Quantum Phenomenon

What I can't yet understand is how this experiment helps validate the theory of time as an emergent quantum phenomenon. It seems more like a demonstration than an experiment to me. What alternative theory does their experiment exclude?

I'm a physicist, but that doesn't mean I understand any of this QM stuff. I have a feeling this is a little like experimentally demonstrating Bell's inequality: one can do experiments whose results are consistent with the predictions of QM, in ways where one might expect other general classes of theories to differ, even without a specific alternative theory to exclude. Most experiments are like this, really. But in the case of this time-entanglement experiment I really don't see room for alternative predictions. I think the paper's title acknowledges this: "Time from quantum entanglement: an experimental *illustration*" (my emphasis).

I'm not saying that the experiment is in any way unhelpful or bad. It's a great idea, but I would not go so far as to say that this is "experimental evidence."


### Wireshark Switches To Qt

Re:Props to all sticking it out and trying Qt out! (79 comments)

You're running Kubuntu 13.10? How is it so far?

I've long enjoyed KDE and PyQt programming. It's nice to see the underlying library move forward so successfully. I've found that, at least with PyQt, the Qt libraries are rather large to ship around; I hope this doesn't increase the size of Wireshark too much. It's nice to be able to easily install and run it on platforms like Raspbian.


### Ask Slashdot: Best Language To Learn For Scientific Computing?

For numeric code, I find that stepwise debugging is rarely helpful and never necessary. Print statements are my primary tool for spot-checking numbers, data structures, and even the general flow of the program. The next tool is creating histograms and other plots of the data you're getting at various stages of the calculation. By varying the inputs and watching the effects on the plots (in vague terms), you have a very powerful and underrated diagnostic. The more work you put into analyzing your program's data, the better off you'll be.
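A minimal sketch of the kind of spot check I mean, assuming numpy and matplotlib are installed; `residuals` is just a stand-in for whatever intermediate array you want to inspect:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for an intermediate result from some numeric pipeline.
residuals = np.random.normal(0.0, 1.0, 10_000)

# Print-style spot checks first: count, range, and any NaNs.
print(residuals.size, residuals.min(), residuals.max(), np.isnan(residuals).sum())

# Then a quick histogram: look for the expected shape, outliers,
# suspicious spikes, or values piling up at a boundary.
plt.hist(residuals, bins=100)
plt.xlabel("residual")
plt.title("spot check of intermediate values")
plt.savefig("residuals.png")
```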

I second the Python/C++ combination, and I should add that you can do a lot with numpy and scipy, so you may very well not need the C++ side of things.

I also wrote a lot of Fortran years ago, but only because I was working with a ton of legacy code. Compared to C and C++, Fortran is actually more elegant for pure numerical computing. I'm not recommending it, mind you, but I found the main weaknesses of Fortran could be mitigated by writing wrappers in Perl to get data in and out. Java is not bad for numerical code either, and it's arguably easier to learn than C++. I guess I'm saying all this to point out that it doesn't much matter which language you use. Visual Basic is a rare exception -- never use that for any reason. :-) And you have to be aware that high-level languages like Python are going to be slow if you give no consideration to which operations are time-consuming. Be particularly mindful of memory allocation / object creation.
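As a toy illustration of that last point, here's the same sum done with a per-element Python loop and with a vectorized numpy call. The array size is arbitrary; only the relative gap matters:

```python
import time
import numpy as np

x = np.random.rand(1_000_000)

# Pure-Python loop: every element becomes a heap-allocated float object.
t0 = time.perf_counter()
s_loop = 0.0
for v in x:
    s_loop += v * v
t_loop = time.perf_counter() - t0

# Vectorized: one pass through compiled code, no per-element objects.
t0 = time.perf_counter()
s_vec = float(np.dot(x, x))
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s  vectorized: {t_vec:.5f}s  diff: {abs(s_loop - s_vec):.2e}")
```

On a typical machine the vectorized version wins by a couple of orders of magnitude, and the gap only grows with array size.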

