Stack Overflow Could Explain Toyota Vehicles' Unintended Acceleration
This should be rule number one for this type of application.
Perhaps it should be rule number one, but actually it's Rule 16.2 of MISRA-C:2004 (Motor Industry Software Reliability Association, Guidelines for the use of the C language in critical systems):
Functions shall not call themselves, either directly or indirectly.
The rule actually appeared first in MISRA-C:1998. Each rule is accompanied by a detailed rationale that I will not reproduce verbatim here, as the standard is not open; one must pay for the privilege. The rationale for 16.2 is that recursion may cause stack overflows. I only cite the rule itself because it appears in public testimony and also on the (first) page linked by this story... which you obviously did not read.
Because MISRA also disallows constructs such as function call indirection, self-modifying code, etc., a compiler is entirely capable of detecting recursion and reporting the violation as an error. MISRA-compliant compilers do exactly that.
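And there's nothing fancy about how: with indirect calls banned, the static call graph is complete, so recursion is just a cycle in that graph. A minimal sketch of such a check (the function names and example call graph are invented for illustration):

```python
def find_recursion(call_graph):
    """Return a call cycle (list of function names) if one exists, else None.
    call_graph maps each function to the list of functions it calls."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {f: WHITE for f in call_graph}
    path = []

    def visit(f):                      # (ironically, recursion finds recursion)
        color[f] = GRAY
        path.append(f)
        for callee in call_graph.get(f, ()):
            if color.get(callee, WHITE) == GRAY:           # back edge: a cycle
                return path[path.index(callee):] + [callee]
            if color.get(callee, WHITE) == WHITE:
                cycle = visit(callee)
                if cycle:
                    return cycle
        path.pop()
        color[f] = BLACK
        return None

    for f in list(call_graph):
        if color[f] == WHITE:
            cycle = visit(f)
            if cycle:
                return cycle
    return None

# Indirect recursion, the kind Rule 16.2 also forbids: parse -> expr -> term -> parse
calls = {
    "main": ["parse"],
    "parse": ["expr"],
    "expr": ["term"],
    "term": ["parse"],   # closes the cycle
}
print(find_recursion(calls))   # ['parse', 'expr', 'term', 'parse']
```

With no function pointers in the program, this graph can't be incomplete, which is why the check can be an outright compile error rather than a heuristic.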
Yes, Virginia, the largest auto manufacturer on Earth ignores the very thing that was designed to prevent simple, common, easily predictable failures such as stack overflow, despite the fact that the cost of compliance is much, much smaller than a rounding error for an outfit like Toyota.
Also, despite the fact that industry dutifully identified this specific problem in a published standard at least 16 years ago, compliance is apparently still not a requirement from government regulators. I suspect they're too busy investigating child-seat manufacturers or Tesla batteries or whatever other politically high-profile crises that giant, engineer-free gaggle of NTSB lawyers fills its banker's hours with.
Darker Arctic Boosting Global Warming
And that, kind reader, is why we must outlaw meat.
Slashdot Tries Something New; Audience Responds!
but I'm surprised so many attribute that to malice.
My accusation was limited to a lack of understanding; never attribute to malice that which is adequately explained by ... a lack of understanding, as they almost say.
The immutability of reader comments has always been a prized feature
Amen. Accountability. It's always been obvious to me why simply revising comments isn't tolerable and I'm glad that view predominates.
And AC isn't a counterpoint to accountability either, for whoever might want to throw that one back at me; AC elevates attributed comments, on several levels.
Slashdot Tries Something New; Audience Responds!
I'm pretty sure contemporary ideas about UX design are inappropriate for Slashdot. The one or two sentences that Twitter/Facebook/WhatsApp accommodate won't work here. This place indulges people who like to write, and people who don't mind lengthy posts.
The beta site shows a serious indifference to that; the amount of wasted space is just amazing. Fully 45% of the comments view is simply empty, half of it given over to the infinitely long sidebar that Beta fails to wrap text around. No one who understands what this site is for could possibly have made that basic mistake for as long as Beta has been in the works.
Bootstrap et al. don't deal with "long form" threaded forums, so that design mentality won't work.
Here is a possibly novel idea that will actually be appreciated by at least this contributor, and probably most others: comment editing with revision control (a la Wikipedia). It has to be revision-controlled or the trolls will abuse editing. Allow readers to punish such trolls with moderation while the rest of us get the benefit of correcting minor mistakes.
There. That wasn't hard. A real improvement that caters to actual contributors, as opposed to hypothetical users who want to scribble a grammatically challenged half-sentence 20 times an hour and don't read.
Anyhow, thanks for the step backwards on this and for your participation in the conversation. You all could have gone bull-headed and made this situation even worse. So good on you for that.
Fire Destroys Iron Mountain Data Warehouse, Argentina's Bank Records Lost
None at all.
And yes, beta has serious problems. Regressing to having to bang on the "Load More..." button instead of "Load All Comments" while not logged in is one glaring example. The fact that comments don't flow around the end of the right-side ad bar is another.
This is not Twitter. People write lengthy comments here; please don't piss away space with huge margins and poor layout.
US Democrats Introduce Bill To Restore Net Neutrality
A petition of the White House to "Restore Net Neutrality By Directing the FCC to Classify Internet Providers as 'Common Carriers'" just attained the 100k signatures required for a response.
I'm sure a number of you would have liked to know about that and sign it at the time... but the story submission was declined. Guess there were too many terribly important climate change stories or something.
Google Releases Dart 1.1
Is Dart an open language spec?
The language spec is CCA 3 and ECMA standards tracked. The source code is BSD.
The <script> tag has a "language" attribute for a reason, the curmudgeons of Slashdot notwithstanding.
James Gosling Grades Oracle's Handling of Sun's Tech
that piece of software/tech tends to perish
Except when it doesn't. VirtualBox hasn't perished.
A colleague of mine speculated that perhaps Oracle had forgotten about VirtualBox and thus it has been spared the obligatory ruining. Perhaps there is a gang of hard core emulator developers quietly slipping in and out of the building each day, carefully avoiding notice.
NetBeans does actually suck less than Eclipse. That's a low bar, to be sure, but it appears to be acquiring more users than it is repelling, so there's another counterpoint.
Cornell Team Says It's Unified the Structure of Scientific Theories
So we're just randomly posting that link to every Slashdot story now?
Oracle Attacks Open Source; Says Community-Developed Code Is Inferior
wouldn't Java be a example of the contrary to this?
Yes, but not the best one. The best would be Oracle's database. Oracle Database Server is not the result of a 'community-based development model,' yet the product has a long, ugly history of vulnerabilities. For some reason it fails to be composed of 'low-defect code,' despite apparently having all the best financial incentives. The list of vulnerabilities is long and grows regularly.
The only reason Oracle Database Server has never been the victim of a SQL Slammer type exploit is that it is so expensive that most instances exist only well behind corporate and government firewalls that, if not well maintained, at least exist. Many SQL Server admins apparently don't believe in firewalls.
However, [Solaris] is more of Sun's creation than Oracle's.
Likewise with Java.
What Are the Genuinely Useful Ideas In Programming?
Point of order: 32-bit ARM code doesn't even have stack instructions
ARM's generalization of the classic PUSH and POP instructions has always been admirable (at least until they made Thumb, which sadly does have those foul instructions), but real-world code uses STMDB (store multiple, decrement before) and LDMIA (load multiple, increment after) to implement stacks, which is exactly why those instructions exist. 32-bit ARM provides a stack pointer (R13, a.k.a. SP per ARM) and a return-address register (R14). This is not merely software convention, either; these registers are banked, allowing distinct values for these specific registers across processor modes, to accommodate the classic call stack in the face of exceptions.
32-bit ARM is every bit as "stack oriented" as anything that has explicit PUSH and POP instructions. There is no pretending otherwise.
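For the avoidance of doubt, here is a toy model of what STMDB sp!/LDMIA sp! do on a full descending stack: decrement before storing on the push; load, then increment, on the pop; lowest-numbered register at the lowest address (ARM's convention). The memory map, register numbers, and values below are invented for illustration:

```python
# Toy model of an ARM full-descending stack built from STMDB/LDMIA.
# 'mem' maps word-aligned addresses to values; 'regfile' maps register
# numbers to their contents.

def stmdb(mem, sp, reg_nums, regfile):
    """STMDB sp!, {regs}: decrement SP first, then store the registers,
    lowest-numbered register at the lowest address."""
    sp -= 4 * len(reg_nums)
    for i, r in enumerate(sorted(reg_nums)):
        mem[sp + 4 * i] = regfile[r]
    return sp

def ldmia(mem, sp, reg_nums, regfile):
    """LDMIA sp!, {regs}: load from ascending addresses, then increment SP."""
    for i, r in enumerate(sorted(reg_nums)):
        regfile[r] = mem[sp + 4 * i]
    return sp + 4 * len(reg_nums)

regfile = {1: 0x11, 2: 0x22, 14: 0xDEAD}       # r1, r2, lr (r14)
mem, sp = {}, 0x1000
sp = stmdb(mem, sp, [1, 2, 14], regfile)        # prologue: push r1, r2, lr
regfile.update({1: 0, 2: 0, 14: 0})             # clobber them in the "body"
sp = ldmia(mem, sp, [1, 2, 14], regfile)        # epilogue: pop them back
print(hex(sp), hex(regfile[14]))                # 0x1000 0xdead
```

One instruction per prologue and epilogue, saving and restoring an arbitrary register set: that's a stack machine's push and pop, generalized.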
Modern ISAs provide large numbers of registers specifically to avoid stack usage
Modern ISAs? Providing a large register file to avoid memory accesses goes back to Berkeley RISC-I (the inspiration for ARM, incidentally) at least. However, what you have then when executing real programs is merely a very limited stack inside the register file. From RISC I: A REDUCED INSTRUCTION SET VLSI COMPUTER:
Our approach is to break the set of window registers (r10 to r31) into three parts (Figure 7). Registers 26 through 31 (HIGH) contain parameters passed from "above" the current procedure; that is, the calling procedure. Registers 16 through 25 (LOCAL) are used for the local scalar storage exactly as described previously. Registers 10 through 15 (LOW) are used for local storage and for parameters passed to the procedure "below" the current procedure (the called procedure). On each procedure CALL a new set of registers, r10 to r31, is allocated; however, we want the LOW registers of the "caller" to become the HIGH registers of the "callee." This is accomplished by having the hardware overlap the LOW registers of the calling frame with the HIGH registers of the called frame: thus, without moving information, parameters in registers 10 through 15 appear in registers 26 through 31 in the called frame.
What we have here is a hardware accelerated stack based on a large banked register file. An optimization.
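The overlap scheme quoted above is easy to model: map each architectural register r10..r31 onto a big physical file through a window base that slides by 16 on every CALL, so the caller's r10..r15 and the callee's r26..r31 land on the same physical cells. A sketch, with the physical file size and register values invented for illustration:

```python
# Toy model of RISC-I register windows: no data moves on CALL, yet the
# caller's LOW registers (r10-r15) appear as the callee's HIGH registers
# (r26-r31), because the windows overlap in the physical file.

class WindowFile:
    SHIFT = 16   # window size (22 regs, r10..r31) minus the 6-reg overlap

    def __init__(self):
        self.phys = [0] * 256   # physical register file
        self.base = 160         # current window's physical origin

    def _idx(self, r):
        assert 10 <= r <= 31, "only windowed registers modeled here"
        return self.base + (r - 10)

    def get(self, r):
        return self.phys[self._idx(r)]

    def set(self, r, v):
        self.phys[self._idx(r)] = v

    def call(self):             # hardware CALL: just slide the window
        self.base -= self.SHIFT

    def ret(self):              # hardware RETURN: slide it back
        self.base += self.SHIFT

rf = WindowFile()
rf.set(10, 42)     # caller writes an argument into LOW register r10
rf.call()          # enter the callee; nothing is copied
print(rf.get(26))  # callee reads the same cell as HIGH register r26 -> 42
```

Which is exactly the claim: the "parameter passing" is a stack discipline baked into the register file addressing, not a data movement.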
Stacks are a software things
If that's true it aligns pretty nicely with Genuinely Useful Ideas In Programming then, no?
And if you go too far down that road, suddenly you're teaching FORTH.
Or the JVM instruction set, for something a tiny bit more relevant.
What Are the Genuinely Useful Ideas In Programming?
the heap, the hash table, and trees
There is nothing basic about these. Each is the subject of ongoing research, and implementations range from simplistic and poor to fabulously sophisticated.
An important basic data structure? Try a stack.
Yes, a stack. The majority of supposed graduates of whatever programming-related education you care to cite are basically ignorant of the importance and qualities of a stack. Some of the simplest processors implement exactly one data structure: a stack, from which all other abstractions must be derived. A stack is what you resort to when you can't piss away cycles and space on... better... data structures. Yet that feature pervades essentially every ISA from the 4004 to every one of our contemporary billion-transistor CPUs.
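To make the point concrete, here is the whole idea in a dozen lines: a machine whose only data structure is a stack, in the spirit of the simplest processors (and, incidentally, the JVM and FORTH). The instruction names are invented for illustration:

```python
# A minimal stack machine: no registers, no named variables, just one stack.

def run(program):
    """Execute a list of (opcode, *operand) tuples; return the final top of stack."""
    stack = []
    for op, *arg in program:
        if op == "push":
            stack.append(arg[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "dup":
            stack.append(stack[-1])
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# (2 + 3) * (2 + 3), computed with no named storage at all
print(run([("push", 2), ("push", 3), ("add",), ("dup",), ("mul",)]))  # 25
```

Every expression, every temporary, every intermediate result lives on the one structure. That's the abstraction the rest are derived from.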
Fusion Reactor Breaks Even
I think this is a decent milestone. While the reactor design itself is unlikely to ever break even, hopefully they're at least learning enough about efficiently triggering a fusion reaction that they can apply it to more productive designs.
This achievement opens the door for future designs. Inertial confinement works; it needs improvement, but we're no longer debating whether it's possible to maintain symmetry, or rehashing any of the many other doubts the detractors dwelled on.
The haters of NIF — and there are many — won't permit followup; they'll have it shut no matter what. For them, the whole idea of seeking energy sources that don't demand energy poverty is inherently illegitimate, and they run the show now. But the work and the results won't die at LLNL; there are other people and other nations that haven't decided to turn themselves into a windmill powered nature preserve.
So we'll have to let them take the ball and run with it. At least it will continue, now perhaps with far more enthusiasm.
OpenZFS Project Launches, Uniting ZFS Developers
and it'll be as good / better than ZFS soon
There hasn't been a commit to the official BTRFS tree in over two months. There have only been five distinct contributors during the entire third quarter. The second quarter saw only 70 commits.
That pace is way too slow for a file system with so many 'to be implemented' features. While not dead, at this rate BTRFS will never surpass ZFS in any notable way.
I'm sincerely sorry about that. Linux contributors just aren't getting it done wrt BTRFS, and that's a crying shame; other operating systems should look on in envy at marvelous Linux file systems.
And yes, I should be in there plugging away at it. So should you. But we're not.
That's not Oracle's fault, either. People just don't care enough to put in the effort. We're just here griping about Oracle and the ZFS license issue and posting about BTRFS being the answer, waiting for someone else to do all that brutally hard work.
We're deluding ourselves.
New Operating System Seeks To Replace Linux In the Cloud
The entire Linux kernel, virtual filesystem, daemons, user commands, etc, are just along for the ride.
A JVM relies heavily on a kernel for scheduling, storage (journaling, RAID, LVM, etc.), networking (IP stack, filtering, bridging, etc.), and virtual memory management, at least. All of those capabilities must exist; they weren't created because someone was naive. Either they land in some library used by the guest JVM or they land in the hypervisor.
This isn't to say the now 40 year old IBM LPAR model is wrong. It clearly works, and VMWare is independently evolving into the same thing. But there are some exaggerated claims of simplicity being offered here.
The fact is, recent x86 CPUs and chipsets have gained powerful virtualization capabilities, including hardware-accelerated I/O, MMU, and interrupt virtualization. This stuff only began to appear in x86 hardware in late 2005, with important new capabilities such as VMCS Shadowing appearing as recently as Haswell (circa June of this year).
It isn't surprising that people are creating hypervisors to leverage this power. When hardware manufacturers give you a powerful virtualization platform, the question is: do you use a legacy OS retrofitted to utilize it, or adopt something supposedly better by virtue of being built with hardware virtualization as a given?
FreeBSD 10 offers the bhyve type-2 hypervisor, which relies on VT-x + EPT. It can virtualize x86, like VMware could do in the late '90s, but FreeBSD hasn't had to synthesize a virtual sandbox from scratch because the hardware provides the hard parts, and the end result is superior.
Next-Gen Video Encoding: x265 Tackles HEVC/H.265
we have an agreement which allows us to utilize x264 code in x265
You don't need an 'agreement' to use x264 code because x264 is licensed under the terms of the GNU GPL v2.0. What, exactly, is this agreement supposed to permit?
Hollywood's Love of Analytics Couldn't Prevent Six Massive Blockbuster Flops
I came up with the exact same summation: too much Indiana Jones. Some parts were great. Bilbo and Gollum under the mountain were truly excellent; that scene really did the book justice. The trolls weren't bad. The dwarf backstory was okay, going far beyond the book and doing it well.
But damn... Radagast the rabbit-sledding superhero? The interminable goblin chase sequence...? Wow. The whole mountain giant sequence was an exercise in excessive CGI combined with some inexplicable contempt for continuity. At some point during production someone had to think, "wtf is this?"
There are two more. It is conceivable they didn't promulgate these mistakes to the remainder, but given that they've undertaken to stretch this relatively simple story over, what, 7.5 to 8 hours of movie... we could be in for a lot more fail.
Visual Studio vs. Eclipse: a Programmer's Comparison
I concur with this. NetBeans is not attempting to be a generic GUI application platform; it is a mere IDE, so it weighs a lot less than Eclipse. I moved to NetBeans because Maven integration with Eclipse is still half-baked after all these years; with NetBeans you just open the Maven project and things work correctly. I stayed with NetBeans because it performs better and just has fewer hairs. Eclipse not spamming .project and .classpath all over the place is just fabulous as well.
It is Oracle, however. One day it might cost $6000 per "seat."
First Exoplanet To Be Seen In Color Is Blue
It seems improbable that a gas giant would
Does it seem improbable to you? Life on Earth evolved in a fluid.
Even if genesis is not possible in a gas giant atmosphere, large planets tend to have lots of moons and, therefore, lots of opportunities for primitive life to emerge. Extremophiles from such a moon could survive a short trip through space to a gas giant's atmosphere. Some small fraction of those would thrive and evolve in the new environment.
I suspect gas giant atmospheres may actually be very fertile. Life is good at producing simple sphere shapes needed for buoyancy. There are probably gas giants with billions of tons of biomass drifting around.
* memory management is explicit [merriam-webster.com] -- what does this mean?
Quantifying the Performance of Garbage Collection vs. Explicit Memory Management
Automatic vs. Explicit Memory Management: Settling the debate
* deterministic [merriam-webster.com] -- what does this mean?
I thought it was self evident. Here is a discussion of the matter.
* endemic [merriam-webster.com] use of a garbage collector... -- what does this mean?
Pervasive would be a better word. Languages that make garbage-collected allocations for most or all things. For example, in Java, aside from primitives, all allocations conceptually occur on a garbage-collected heap.
reference-counted heap objects
Reference counting: counting the number of references to an object.
Heap: an arena of memory maintained by a memory allocator. CPUs typically have no knowledge of how software manages heaps; you may be thinking of virtual memory.
Objects: object in the generic sense of some amount of memory managed on a heap. These lecture notes show the same usage. The editors of this page also use the word 'object' in exactly the same manner when discussing pointers. It's not that hard to follow.
Putting it together we have objects on a heap for which reference counts are maintained; reference-counted heap objects.
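A minimal sketch of the mechanism, with all names invented for illustration: an allocator hands out heap slots, each slot carries a count of live references, and the slot is reclaimed the instant the count reaches zero.

```python
# Minimal model of reference-counted heap objects. Real implementations
# (C++ shared_ptr, Rust's Rc, Python's own object header) differ in
# detail, but this is the whole idea.

class Heap:
    def __init__(self):
        self.slots = {}       # address -> [refcount, payload]
        self.next_addr = 0

    def alloc(self, payload):
        addr = self.next_addr
        self.next_addr += 1
        self.slots[addr] = [1, payload]   # the creating reference counts
        return addr

    def incref(self, addr):               # a new reference appears
        self.slots[addr][0] += 1

    def decref(self, addr):               # a reference goes away
        self.slots[addr][0] -= 1
        if self.slots[addr][0] == 0:
            del self.slots[addr]          # deterministic destruction, right here

heap = Heap()
a = heap.alloc("payload")
heap.incref(a)            # second reference taken
heap.decref(a)            # ...and released
heap.decref(a)            # last reference dropped: the object is freed now
print(a in heap.slots)    # False
```

Note the contrast with a tracing collector: the free happens at a known program point, on the decref that hits zero, which is what people mean by "deterministic" above.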
"exchange" heap -- what does this mean?
* "local" heap -- what does this mean?
The link I provided to Patrick Walton's blog would get you there. Also, there is documentation. Sorry if discussing a new programming language involves terms you haven't heard. Computing can be like that sometimes.
(note: there is only one "heap" on most CPU architectures, so now we have added abstraction)
Now you are definitely confusing heaps and virtual memory. There are usually many, possibly thousands, of heaps on a system at any given time, with many distinct implementations of which the CPU is entirely ignorant. Memory allocators and virtual memory are different things.
* via an "owned" pointer -- what does this mean?
Similar to a C++ auto_ptr or unique_ptr. Again, the link I provided would get you there.
* wild pointers -- what does this mean?
Dangling pointer and wild pointer are synonymous.
Use of the exchange heap is exceptional and explicit yet immediately available when necessary -- what does this mean?
I provided a link directly to a discussion of this.
Memory "management" is reduced to efficient stack pointer manipulation -- uhh, what? the language sits around modifying content at %esp and %ebp along with some offsets? sounds far from efficient)
Incrementing and decrementing a stack pointer register is very efficient. Offsets are computed at compile time, and the instructions typically require one CPU cycle and no memory access, given a naive model of a CPU. These techniques are ancient and ubiquitous. Sorry you weren't familiar with them.
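A sketch of why this is cheap, with the frame layout invented for illustration: entering a function is one subtraction of the stack pointer, every local lives at an offset fixed at compile time, and exit is one addition.

```python
# Model of stack-frame "memory management" as nothing but SP arithmetic.

STACK_SIZE = 64
mem = [0] * STACK_SIZE
sp = STACK_SIZE              # full-descending stack: grows toward 0

# "Compile time": the compiler decides the frame for f() holds two locals
# and fixes their offsets within the frame once and for all.
FRAME_SIZE = 2
OFF_X, OFF_Y = 0, 1          # offsets of locals x and y

# "Run time": function entry allocates the whole frame with one decrement...
sp -= FRAME_SIZE
# ...and every local is a plain base+offset access: no search, no metadata,
# no allocator bookkeeping of any kind.
mem[sp + OFF_X] = 7
mem[sp + OFF_Y] = 35
result = mem[sp + OFF_X] + mem[sp + OFF_Y]
# Function exit frees the entire frame with one increment.
sp += FRAME_SIZE
print(result, sp)            # 42 64
```

Compare that to any heap allocator, which must at minimum find a free block and record its size. The frame's size and layout were settled before the program ever ran.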
or simple, deterministic destruction -- what does this mean?
Others seem to have no difficulty with these terms. In particular, they are not compelled to link merriam-webster at each use, for some reason.
(note: 2nd use of word "deterministic")
Reusing words is an important feature of language.
Compile time checks preclude bad pointers and simple leaks so common with traditional systems languages -- what does this mean and how does this work, considering that the value stored at a pointer (or what it points to) can be manipulated at run-time, so how would the language "deterministically know" (see what I did there?) what's "bad" vs. "good"?
"Bad", "wild", or "dangling" pointers are memory safety faults or violations. It is a feature inherent to Rust that they can't exist. Feel free to learn about it.
* ... that is productive, concise
Holy shit! An opinion on Slashdot? Say it isn't so.