The Army Is 3D Printing Warheads
There is an upper bound to how many restrictions people will tolerate in a license. If you add even one restriction too many, people will stop using the software at all. Where possible, people will fork an older version of the software; where that's not possible, they will switch to something else, or perhaps start their own project under a different license.
For an example from history, look at what happened to XFree86 when they changed the terms of their license. Pretty much overnight, almost everyone stopped using XFree86 and switched to the then-new X.org project. I'm sure that the XFree86 guys thought that the world would just accept the changes to the license, but that's not what happened; what happened instead is that XFree86 became instantly irrelevant.
So, if RMS takes your advice and adopts the restrictions you propose, some nonzero number of users will fall away, and new forks of the software will begin to appear. Meanwhile the military users will shrug and just deal with it. There is exactly zero chance that your proposed GPLv4 will change the plans of the military, even a little bit.
So now the question becomes: what are you trying to accomplish with your proposed GPLv4? If the benefits outweigh the costs, do it. But do it with full knowledge that there will be costs, and among the costs will be increased fragmentation of open-source software projects (more forks and more new projects).
A CNC machine or a 3D printer can be used to make medical parts, or weapons. It follows that if the military contributes code to control a CNC machine or 3D printer, the contributed code could be used for good purposes. One consequence of your proposed GPLv4 license: code under such a license would no longer receive contributions from the military. Is that part of what you wanted to achieve? I don't see this as a win, myself.
Microsoft's Missed Opportunities: Memo From 1997
If licensed like DOS, it would have every bit as many compatibility problems.
Oh, not as bad, at least at first. The companies licensing MacOS would have had to make suitable hardware, and Apple could have held their feet to the fire to get compatibility and quality.
In those days, there was so much pent-up demand for Mac laptops that there were companies that would buy a Mac, crack it open and pull out the ROMs, build a laptop with the ROMs, and provide some sort of docking station so the original Mac would not be useless. This was about the most expensive way to make a laptop ever, but it was the only legal way to do it. Apple took forever to release a laptop product, and when they did, it was not what the customers wanted (heavy due to the lead-acid battery for one thing). Third-party Macs could have cost significantly more than generic "beige box" PCs and customers would have paid happily.
The thing is, Apple was charging crazy money for Macs. If Apple had adopted the Microsoft model, they would have had to accept lower margins on each Mac and make it up on volume. Third-party Macs would have cost less than official Apple Macs, yet still sold in huge numbers and buried the DOS-on-x86 PC. Apple was marking up Macs by about 100%... in other words, successfully getting a 50% margin on each Mac. Nobody else got away with that kind of markup, before or since.
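In case the markup-versus-margin distinction isn't clear, here's the arithmetic with made-up round numbers (the dollar figures are purely illustrative, not Apple's real costs):

```python
# Markup is profit relative to cost; margin is profit relative to price.
# Hypothetical numbers: a Mac that costs $1,000 to build, sold for $2,000.
cost = 1_000
price = 2_000

markup = (price - cost) / cost    # profit as a fraction of what it cost to build
margin = (price - cost) / price   # profit as a fraction of the selling price

print(f"markup: {markup:.0%}")  # 100% markup...
print(f"margin: {margin:.0%}")  # ...is the same thing as a 50% margin
```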
It was great for Apple while it worked. But eventually Windows got to the point where it was kind of usable. And a Compaq running Windows would cost less than half what Apple was getting for a Mac. Hastings's Law: Adequate and cheaper tends to win against better but more expensive. Windows sales took off and Apple nearly died.
What saved Apple was the PowerBook, a laptop that really was what customers wanted. And a string of other successful products. And now Apple is doing very well. But IMHO, Apple could have had success like Microsoft in the 1990's had they adopted the Microsoft strategy of licensing to everyone and making a small profit on a huge volume; instead they nearly went out of business.
Even now, Apple isn't getting anything close to 50% margins on Macs. Those days are over.
Russia Prepares For Internet War Over Malaysian Jet
From the summary:
U.S. and U.K. news organizations are studiously trying to spread the blame
WTF? Is this intended to somehow suggest that the USA and/or UK share some portion of blame?
The article linked in that part of the summary is a CNN article making the case that shoulder-fired missiles cannot reach 33,000 feet, so it must have been military gear. That's it... it even notes that both Russia and the Ukraine have such missiles.
This is news, and a news organization is reporting on it. Go figure. "trying to spread the blame"? "studiously", even! Really?
Microsoft's Missed Opportunities: Memo From 1997
At the time, discontinuing the licensing of Mac clones was the right thing to do. All they did was tarnish Apple's image.
Actually, I agree with both you and the person to whom you are responding. Apple could have killed Windows by licensing out Mac OS, but it was the wrong thing at the time they actually tried it.
The Microsoft approach was to license out DOS and Windows to anyone who wanted it, taking a small royalty per copy and making money on a huge volume. The Apple approach is to make more money per unit, while selling fewer units. I firmly believe that if Apple had tried the Microsoft approach in, say, 1988, they would have won big-time. Windows was still a joke in 1988, and people were spending crazy money to buy Macs.
Licensing out Mac OS in small volume gains the benefits of neither approach. If Apple only got small volumes, they couldn't make Microsoft levels of money on a small royalty; yet cheap "clones" reduced their ability to charge large amounts on small volumes.
Steve Jobs never wanted the Microsoft approach anyway. He wanted to sell premium stuff that looked awesome and commanded a premium price. But I wish that Apple had embraced the Microsoft model early; we'd all be running Motorola processors rather than x86.
Ode To Sound Blaster: Are Discrete Audio Cards Still Worth the Investment?
I am about to buy an external audio device. To my knowledge, this is the best device you can get for a similar amount of money... you can spend a lot more money to get something about as good, or spend less money and get something worse.
The device is called an O2 amplifier plus ODAC. It was designed by someone who went by the name of "NwAvGuy".
The O2 is a really clean analog amplifier, and is actually open source hardware. You can get the parts list, order the parts yourself, solder everything together, and have your own O2. You can pair it with any DAC, but NwAvGuy also designed a DAC called the ODAC. He(?) said that he would have liked to make the DAC open source as well, but it wasn't practical.
I will buy mine from a company called JDS Labs. They sell a single nice integrated device with O2 and ODAC in one enclosure.
There are audiophiles who sneer at the O2 because it doesn't cost enough. At my previous job I spent hours listening to music on an O2 with Sennheiser 650 headphones, and I want to be able to listen to music with that level of quality again. I am willing to spend my own money to do it.
I thought about buying a really nice DAC but I always hesitated to spend the money because it can be hard to figure out what is worth the extra money, and what is just extra expense. I am friends with a world-class audio geek, and he agrees that this is a good quality audio device. If you want top quality and you are spending your own money, get or make an O2.
Canada Poised To Buy 65 Lockheed Martin F-35 JSFs
Sometimes a new thing looks like a disaster for a while, but in the long run proves itself. The M-16 rifle is a tremendously successful design, but there were issues with the first models that made it look like a huge mistake.
So I am watching the F-35 and I am wondering: will this be as big a disaster as the nay-sayers claim, or will this work out in the long run?
I'm guessing it will limp along as a middle-of-the-road thing: not a complete horrible disaster, just a really expensive airplane that doesn't live up to expectations.
Also, I have read that it is intended that a bunch of F-35s will share data with each other, and help each other detect and deal with threats; but the giant costs of the program have made it much less likely that enough F-35s will fly together at one time for this to work out.
One thing I am certain about: It's a mistake to try to replace the A-10 Warthog with F-35s. I don't even understand how the F-35 is supposed to do the same mission.
Apple's 2014 WWDC Keynote Will Be Streamed Live; Hopes For a Microconsole?
Apple is known for limiting the number of different products. IMHO Apple is unlikely to ship a "microconsole" and continue to ship the Apple TV.
Much more likely: the "4th generation" Apple TV, which will not only do everything an Apple TV does, but will also play games if you buy a controller.
According to Wikipedia, the current Apple TV uses a single-core ARM chip. For gaming, Apple should put in a more powerful chip, which may imply a price hike. Perhaps Apple will continue to sell the current generation as a less-expensive model, for those who don't care about games.
Researchers Experiment With Explosives To Fight Wildfires
In the movie Fires of Kuwait, my favorite part showed a modified tank called "Big Wind".
Instead of a cannon, "Big Wind" has two jet engines from a MiG fighter plane, and it uses those to blow out fires the same way you might blow out a candle on a birthday cake, only at epic scale.
It's probably more practical, for wildfires, to use a helicopter to deliver explosive devices rather than drive a tank around. Setting up the water reservoirs in advance would be a problem also. The tank worked very well in Kuwait, though!
The Sci-Fi Myth of Robotic Competence
Stuff like this:
SF writers invented the robot long before it was possible to build one. Even as automated machines have become integral to modern existence, the robot SF keeps coming. And, by and large, it keeps lying. We think we know how robots work, because we've heard campfire tales about ones that don't exist.
The myth of robotic competence is based on a hunch. And it's a hunch that, for the most part, has been proven dead wrong by real-life robots.
Actual robots are devices of extremely narrow value and capability. They do one or two things with competence, and everything else terribly, or not at all.
The article doesn't contain the phrase "the Three Laws would only work on a true AI", but it does make the point that fiction depicts true AI, and we don't have true AI in the real world.
The Sci-Fi Myth of Robotic Competence
Sorry to say it, but I think it is you who has missed the author's point entirely.
The author asked the question: if a car can save two lives by crashing in a way that kills one life, should it do so? And many people rejected the question out of hand.
The author listed three major ways people rejected the question:
"Robots should never make moral decisions. Any activity that would require a moral decision must remain a human activity."
"Just make robots obey the classic Three Laws!"
"Robots will be such skillful drivers that accidents will never happen, so we don't need to answer this question!"
None of those responses is well-reasoned, and that is the whole point of TFA.
The author went on to point out that the Three Laws are fictional laws that were applied to fictional full AIs that we don't have in the real world.
P.S. I do think that robot car drivers will rarely have crashes. As others have pointed out, the AI never gets sleepy or bored, and never takes stupid chances due to impatience. AI cars drive in a boring way, and if the majority of all cars were doing that, there would be a great reduction in crashes.
That said, of course the AI must be programmed with some strategy to cope with a crash. I'll bet that in the current generation it's mostly "swerve in a direction that doesn't appear to have any obstacles" and "stomp on the brakes" but there has to be something.
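For fun, here's a toy sketch of the kind of last-resort logic I mean. This is pure guesswork on my part, not based on any real autonomous-vehicle code; the function and its inputs are entirely hypothetical:

```python
# Hypothetical last-resort crash logic: prefer swerving toward a
# sensor-reported obstacle-free direction; otherwise just brake hard.
# Not based on any real autonomous-vehicle codebase.

def emergency_maneuver(obstacle_free_directions):
    """Pick an evasive action given a list of directions (e.g. 'left',
    'right') that the sensors report as clear of obstacles."""
    if obstacle_free_directions:
        # Swerve toward the first clear direction.
        return ("swerve", obstacle_free_directions[0])
    # No clear direction: all we can do is shed speed.
    return ("brake", None)

print(emergency_maneuver(["left"]))  # ('swerve', 'left')
print(emergency_maneuver([]))        # ('brake', None)
```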
This is a specific case of a general problem: navigating cost/benefit tradeoffs. Suppose I have a new car design, and it is safer than old car designs. Then the more people switch to the new car, the more lives are saved. But the more expensive the car is, the fewer people buy the car. Now, I could add one more feature, and it makes the car even safer but it also makes the car even more expensive. Do I add the feature? Then fewer people get the safe car, but those people are extra safe. Do I omit the feature? More people get the safe car but it isn't as safe as it could be. How do you decide?
You use math, and do your best. But some people will reject the question. "It's immoral and shocking to reduce human lives to numbers in an equation..." Oh yeah, it's so much more moral to just guess at what to do, rather than try to apply math to the problem.
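Here's what "use math" looks like as a toy expected-value model. Every number below is invented purely for illustration; the point is the shape of the tradeoff, not the specific figures:

```python
# Toy model of the safety-feature tradeoff, with made-up numbers.
# Adding a feature lowers fatality risk per car sold, but raises the
# price and therefore shrinks the number of buyers who get the safer car.

def lives_saved(buyers, baseline_risk, car_risk):
    """Expected lives saved versus everyone keeping the baseline car."""
    return buyers * (baseline_risk - car_risk)

baseline_risk = 1e-4  # hypothetical fatality risk per owner of an old car

# Option A: safe car without the extra feature -- cheaper, so more buyers.
saved_a = lives_saved(buyers=1_000_000, baseline_risk=baseline_risk, car_risk=4e-5)

# Option B: even safer car with the feature -- pricier, so fewer buyers.
saved_b = lives_saved(buyers=600_000, baseline_risk=baseline_risk, car_risk=2e-5)

print(f"omit the feature: ~{saved_a:.0f} lives saved")
print(f"add the feature:  ~{saved_b:.0f} lives saved")
```

Under these invented numbers the cheaper car wins; flip the demand assumption and the answer flips. Which is exactly why you have to do the math instead of guessing.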
The Sci-Fi Myth of Robotic Competence
Asimov's Three Laws of Robotics are justly famous. But people shouldn't assume that they will ever actually be used. They wouldn't really work.
Asimov wrote that he invented the Three Laws because he was tired of reading stories about robots running amok. Before Asimov, robots were usually used as a problem the heroes needed to solve. Asimov reasoned that machines are made with safeguards, and he came up with a set of safeguards for his fictional robots.
His laws are far from perfect, and Asimov himself wrote a whole bunch of stories taking advantage of the grey areas that the laws didn't cover well.
Let's consider a big one, the biggest one: according to the First Law, a robot may not harm a human, nor through inaction allow a human to come to harm. Well, what's a human? How does the robot know? If you dress a human in a gorilla costume, would the robot still try to protect him?
In the excellent hard-SF comic Freefall, a human asked Florence (an uplifted wolf with an artificial Three Laws design brain; legally she is a biological robot, not a person) how she would tell who is human. "Clothes", she said.
In Asimov's novel The Naked Sun, someone pointed out that you could build a heavily-armed spaceship that was controlled by a standard robotic brain and had no crew; then you could talk to it and tell it that all spaceships are unmanned, and any radio transmissions claiming humans are on board a ship are lies. Hey presto, you have made a robot that can kill humans.
Another problem: suppose someone just wanted to make a robot that can kill. Asimov's standard explanation was that this is impossible, because it took many people a whole lot of work to map out the robot brain design in the first place, and it would just be too much work to do all that work again. This is a mere hand-wave. "What man has done, man can aspire to do" as Jerry Pournelle sometimes says. Someone, somewhere, would put together a team of people and do the work of making a robot brain that just obeys all orders, with no pesky First Law restrictions. Heck, they could use robots to do part of the work, as long as they were very careful not to let the robots understand the implications of the whole project.
And then we get into "harm". In the classic short story "A Code for Sam", any robot built with the Three Laws goes insane. For example, allowing a human to smoke a cigarette is, through inaction, allowing a human to come to harm. Just watching a human walk across a road, knowing that a car could hit the human, would make a robot have a strong impulse to keep the human from crossing the street.
The Second Law is problematic too. The trivial Denial of Service attack against a Three Laws robot: "Destroy yourself now." You could order a robot to walk into a grinder, or beam radiation through its brain, or whatever it would take to destroy itself as long as no human came to harm. Asimov used this in some of his stories but never explained why it wasn't a huge problem... he lived before the Internet; maybe he just didn't realize how horrible many people can be.
There will be safeguards, but there will be more than just Three Laws. And we will need to figure things out like "if crashing the car kills one person and saves two people, do we tell the car to do it?"
Game of Thrones Author George R R Martin Writes with WordStar on DOS
Dos can access a lot more than 640k - the limit on real mode access is 1mb.
True! So, if DOS can access 1 MB, where does the 640K limit come from? Long story short, it's because IBM's BIOS sucked.
Okay, longer story:
Everyone was supposed to use the BIOS for basic operations, including writing text to the screen. But the BIOS was poorly designed; the only way it offered to write to the screen was one character per call into the BIOS. And calling into the BIOS was kind of slow (remember, we are talking about computers three orders of magnitude slower than current computers... a 4.77 MHz processor).
Since the BIOS was too slow, people didn't use it. Instead, they figured out the address of the screen buffer in the graphics card, and just wrote the desired text directly into the buffer. So much faster!
But this meant that all the most popular software for DOS was not using the BIOS, and had a particular hardware dependency hard-coded. And the memory region reserved for video just happened to start at 640K (address 0xA0000; the text-mode frame buffers themselves sat a bit above that, at 0xB0000 for mono cards and 0xB8000 for color). Those addresses were chosen back in the days when RAM was really expensive, and computers might only have 64K or even less. So nobody saw a problem coming... and besides, everyone was going to be using the BIOS, right? So you should be able to move the graphics card, change the BIOS, and all the software would still work. Whoops.
With the benefit of hindsight, what should have happened was: a DOS program uses the BIOS to query the address of the frame buffer, so the graphics card can move around anywhere in memory. And the BIOS should have had a "write whole string" function from the beginning. (Much later versions of the BIOS had a "write whole string" function but I don't think any popular software ever used it, as it was not available in the giant installed base of old DOS computers.)
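To make the direct-write trick concrete, here's a sketch of the address arithmetic, simulating the 80x25 color text buffer as a byte array. The real DOS code poked these bytes straight into video memory at segment 0xB800; the layout shown (two bytes per cell) is the actual text-mode layout:

```python
# Simulating the 80x25 color text-mode buffer (really at address 0xB8000).
# Each character cell is two bytes: the character, then an attribute
# (color) byte. Writing text is just computing the offset and storing.
COLS, ROWS = 80, 25
screen = bytearray(COLS * ROWS * 2)

def put_string(row, col, text, attr=0x07):  # 0x07 = light grey on black
    for i, ch in enumerate(text):
        offset = ((row * COLS) + col + i) * 2
        screen[offset] = ord(ch)      # character byte
        screen[offset + 1] = attr     # attribute byte

put_string(0, 0, "Hello, DOS")
# The 'H' lands at offset 0, the 'e' at offset 2, and so on.
print(chr(screen[0]), chr(screen[2]))  # H e
```

No BIOS call per character, just memory stores: that's why it was so much faster.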
Game of Thrones Author George R R Martin Writes with WordStar on DOS
If it's working for him, then this makes sense.
What a non-story!
P.S. I assume that no words or names in his fantasy world have any accents or any characters not in the basic ASCII set. DOS WordStar is notably lacking in support for extended characters of any sort. (In fact DOS WordStar uses the high bits of characters for its own purposes, so it cannot ever work with anything beyond 7-bit ASCII.)
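To illustrate: if I remember the WordStar document format correctly, it set bit 7 on bytes for its own markup (flagging the last letter of each word, among other uses), so a raw WordStar file isn't valid ASCII, and any byte >= 128 is formatting rather than an accented character. Recovering plain text means masking that bit off (the sample bytes below are invented for illustration):

```python
# WordStar document mode reused bit 7 of each byte for its own purposes,
# so bytes >= 128 are formatting marks, not extended characters.
# Recovering plain ASCII means masking the high bit off every byte.

def wordstar_to_ascii(data: bytes) -> str:
    return "".join(
        chr(b & 0x7F)
        for b in data
        if (b & 0x7F) >= 0x20 or b in (0x0A, 0x0D)  # keep printables + newlines
    )

# 0xEF is 'o' (0x6F) with the high bit set; 0xD3 is 'S' (0x53) likewise.
sample = bytes([0x48, 0x65, 0x6C, 0x6C, 0xEF, 0x20, 0x44, 0x4F, 0xD3])
print(wordstar_to_ascii(sample))  # Hello DOS
```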
The Feature Phone Is Dead: Long Live the 'Basic Smartphone'
Huh, I didn't know about this. Now I wish I could edit my original post.
I just did a few Google searches. Results:
Feature phone apps are big business in India (article from 2011):
Facebook just spent $16 billion for a company that produced J2ME apps:
So I guess the term "feature phone" and the term "smartphone" are fuzzier than I thought. The more a phone looks like an iPhone or Android phone, the more it is a "smartphone" I guess.
The Feature Phone Is Dead: Long Live the 'Basic Smartphone'
A "feature phone" is a phone that does more than just let you make calls, but is not as powerful as a smartphone. I'd say the key difference is that a smartphone lets you install apps, while on a feature phone the only "apps" are the ones that came pre-loaded; you get what shipped with the phone and nothing else.
Also, everyone expects a "smartphone" to have a multitouch screen these days. In the early days of smartphones, some phones didn't have this (e.g. the classic Blackberry had no touch screen at all, just a trackball!). Feature phones are less expensive than smartphones because they omit the fancier components like a multitouch screen.
There are a few people who want the simplicity of a feature phone... for example, some people really don't like it when their phone locks up or spontaneously reboots. (I don't like it either, but I'll put up with it happening from time to time in return for things like a web browser.) But in the long run, it will be cheaper and easier for the mobile carriers to just offer smartphones. Why pay developers to write custom "apps" for a phone, when you can just slap Android on the thing and pre-install a few Android apps?
Oklahoma Moves To Discourage Solar and Wind Power
You are correct that there is no good way to store grid-level amounts of power. The best we have is pumped hydro, but we will never build any more of that. (Environmentalists are working hard to try to tear down existing dams, so good luck building new ones to make pumped hydro storage. And the best sites have already been built anyway.)
I do have hopes that the Ambri liquid-metal batteries will work as promised. I'm not an expert on this stuff, but the technology does seem to make sense as far as I can tell.
Practical grid-scale energy storage would really help solar and other renewable energy sources become practical and dependable.
The Best Parking Apps You've Never Heard Of and Why You Haven't
The point is valid, but not that helpful. Yes, our current system for finding useful apps is imperfect.
Sometimes when you invent a better mousetrap, the world doesn't figure it out and beat a path to your door. It would be great if the best ideas always win in the marketplace of ideas, but sometimes they don't.
And, if you can solve this general problem, you will be very popular.
I think social media can help a bit, but it's no panacea. (TFA noted that the voting for apps doesn't favor the best apps, and the voting system is arguably a form of social media.) Sometimes I hear about cool stuff on Facebook or whatever, but marketers try to spam us even on Facebook and its signal-to-noise ratio is degrading.
P.S. I live near Seattle. I went to Emerald City ComiCon again this year, and I arrived too late to get one of the spaces in the convention center parking. I wound up getting a good space using the BestParking app. So, it worked well for me the one time I tried it.
I found this app by doing Google searches for parking the night before, and finding the bestparking.com web site, which advertised that they had a mobile app.
Mr. Schmidt Goes To Washington: A Look Inside Google's Lobbying Behemoth
Consider the history of Microsoft. In the past, Microsoft didn't expend any significant money or effort on lobbying in Washington, D.C. Then during President Clinton's time in office, Microsoft faced serious threats from the Federal government... the worst being that a Federal judge actually ordered that Microsoft be split up. This order was voided by a higher court, so it didn't happen... but you had better believe that Microsoft took it as a hard lesson.
Microsoft now spends a great deal of money and effort on lobbying in D.C. I don't blame them for self-defense via lobbying. (I do blame them for attacking other companies via lobbying, if they do. See below for allegations that they do.)
Google isn't waiting for D.C. to turn on them; they are lobbying to "manage their relationship" with the Federal government. So is Facebook.
Here's an article from 2008 about Google learning the importance of lobbying. It includes allegations that Microsoft was using its lobbying infrastructure to try to prevent a deal Google was trying to make with Yahoo.
Now I'm picturing Google using its leverage to attack Microsoft, and Eric Schmidt saying "The circle is now complete. In 2008, Google was just a student... now I am the master."
Can the ObamaCare Enrollment Numbers Be Believed?
Other important questions: how many of those 7.1 million have actually paid for the policies, and how many just went through the web site? Also, how many of these policies are insuring the previously uninsured, and how many are insuring people who lost their previous insurance due to the ACA?
I don't have those numbers. Nobody seems to have those numbers... Kathleen Sebelius has said "we don't know that" (see YouTube link below).
I have a suspicion that if the numbers were good, somehow they would have the numbers.
The Daily Mail article says that a RAND Corporation study estimates the number of previously uninsured people who have actually paid for their policies at 858,000 (well under a million!). I haven't found a source for this; I believe they computed the number themselves, using the percentages in the RAND report.
Avik Roy read the same report, and reports the number as 1.4 million +/- 0.7 million, i.e. 700,000 people to 2.1 million people, 95% confidence.
I believe this is the RAND Corporation study being discussed: http://www.rand.org/content/dam/rand/pubs/research_reports/RR600/RR656/RAND_RR656.pdf
Navy Debuts New Railgun That Launches Shells at Mach 7
I'm wondering if the $25,000 round is inert, or if it includes guidance.
The Navy has been talking about railgun projectiles with GPS guidance. All it would take is movable steering fins and a computer to drive them.
No matter how good your targeting computers are, I think you need active guidance any time you are talking about a 100-mile range.
If I have done my math correctly, it will take about 67 seconds for a Mach 7 projectile to travel 100 miles (and that's assuming constant speed, not accounting for drag). That's a long time for a free-flying projectile to be subject to random winds.
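Here's the back-of-the-envelope calculation, using the sea-level speed of sound and ignoring drag entirely:

```python
# Rough flight time for a Mach 7 projectile over 100 miles,
# using the sea-level speed of sound and ignoring drag.
speed_of_sound = 343.0            # m/s at sea level
velocity = 7 * speed_of_sound     # Mach 7, about 2401 m/s
distance = 100 * 1609.344         # 100 miles in meters

flight_time = distance / velocity
print(f"{flight_time:.0f} seconds")  # ~67 seconds
```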
Of course I'm not a physicist or ballistics expert. If I have made a mistake here, please let me know.