TCP/IP Might Have Been Secure From the Start If Not For the NSA
I was one of the lead team members at System Development Corporation (SDC) in the 1970s on various secure operating system and secure networking projects for US and UK government bodies.
Some of that work was classified, much was not.
In late 1974 David Kaufman and I were working on network security, particularly on the then-monolithic TCP (there was at that time no formalized underlying datagram IP layer). Among other things we were designing and building a multi-level secure network, with multi-level verified secure switches/routers, for a government agency.
In our work we split an encrypted datagram layer off from the underside of TCP. Because of the nature of packet ordering, packet loss, and retransmissions, as well as aspects of various security algorithms, this was not as straightforward as one might think.
What we came up with was a precursor to what are today encrypted VLANs, IPsec, and key distribution infrastructures.
However, we were not able to publish our work widely. In fact, now, 40 years later, there is scarcely anything visible on the public web. Even our work that was published via the then National Bureau of Standards (now NIST) is not easily found. (I have been searching for years for a copy of some work I did on debugging hooks for secure operating systems.)
We also worked on things like capability-based computers and operating systems with formal verification of security properties. During that time I designed and wrote what is arguably the first formally verified secure operating system. That work, also, tends to remain hard to find.
Vint Cerf was a consultant to our group. He helped. But the major thrust and principal design work was done by our team at SDC.
The US Department of Defense (which includes several agencies) funded much of this work - and really helped move things along - but its institutional resistance to wide publication meant that many of the ideas and implementations we created in the mid-1970s were invisible to most of the world until they were re-invented decades later.
Ask Slashdot: Does SSL Validation Matter?
The barrier to entry for a certificate authority to be recognized by browsers is too high; as a consequence, the price for certificates is too high - it is based on near-monopoly conditions.
Smart Power Grid Could Wreak Havoc On Itself
In multicast network code it is common to randomize scheduling by a factor of +/- 50% in order to reduce synchronization effects.
Similarly, power use scheduling could be randomized across some range.
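A minimal sketch of that desynchronization technique (function and parameter names here are illustrative, not from any real multicast or smart-grid codebase): each node stretches or shrinks its nominal interval by a random factor so that a fleet of devices does not act in lock-step.

```python
import random

def jittered_interval(base_seconds: float, jitter: float = 0.5) -> float:
    """Randomize a nominal interval by +/- jitter (50% by default) so that
    many independently scheduled nodes do not synchronize their actions."""
    return base_seconds * random.uniform(1.0 - jitter, 1.0 + jitter)

# A device nominally cycling every 60 seconds would actually wake up
# somewhere between 30 and 90 seconds from now.
next_wakeup = jittered_interval(60.0)
```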
Why Any Competing Whois Registry Model Is Doomed
I have long held that competing DNS root systems *can* work - and in fact have been working for a long time.
The issue is not whether there is one singular catholic DNS root, but rather the degree of consistency between competing roots.
We all accept that internet users dislike surprises - they will not like any DNS root that gives surprising (or misleading or fraudulent) answers. Any DNS root that does so will quickly be shunned.
What is intriguing about competing DNS roots is that they provide a way around ICANN and around ICANN's choices - and ICANN's fees and ICANN's trademark-over-everything-else policies.
I wrote a note on this topic some years ago - "What would the internet be like had there been no ICANN?" at http://www.cavebear.com/cbblog-archives/000331.html
Company Claims Ownership of Digital Messaging
IP multicast has been in active use on the internet since the 1980s.
IP multicast lets receivers join groups, defined by a special class of IP addresses. Senders emit packets addressed to those addresses, and the IP multicast routing systems (of which there are several) build distribution trees to get those packets to those receivers.
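The receiver-side "join" is visible in ordinary socket code. This is a minimal sketch in Python; the group address and port are arbitrary examples, and the function names are my own:

```python
import socket
import struct

def make_membership_request(group: str, interface: str = "0.0.0.0") -> bytes:
    """Pack the ip_mreq structure that IP_ADD_MEMBERSHIP expects:
    the group address followed by the local interface address."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(interface))

def join_group(port: int, group: str) -> socket.socket:
    """Bind a UDP socket and ask the kernel (and, via IGMP, the upstream
    routers) to deliver traffic sent to the given multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock

# e.g. sock = join_group(5007, "239.1.2.3")  # administratively scoped group
```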
So to the extent that this patent's claims include subscription-based addressing and transmission of data packets, IP multicast has been a running example of this for at least a quarter of a century.
DisplayPort-To-HDMI Cables May Be Recalled Over Licensing
What about cables that go from DVI to HDMI?
Google Builds Biometric Models of Celebrity Faces
All those security cameras out there are recording everyone. And a lot of that footage is retained.
With this kind of technology all of that past footage could be scanned and a dossier of past whereabouts created.
(Yes, I know that our mobile phones are already reporting on our whereabouts, but at least you can turn a phone off.)
Wardrivers Target Seattle Businesses
It would be easy to set up a weakly protected access point that did nothing but generate bogus transactions with bad credit card numbers - that could pollute the crooks' database, particularly if they don't do a good job of recording which card number came from which network.
And if the bogus numbers were timestamped and logged, then when the bad card numbers are used (and bounced) one could use the bounced transactions to build a map of where the crooks were on any given day.
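One plausible reading of "bad" numbers - and this is my assumption, not something the scheme requires - is numbers that pass the Luhn check digit test (so a crook's first-pass sanity filter keeps them) yet correspond to no real account and so bounce at the bank. A sketch, with hypothetical function names:

```python
import random

def luhn_check_digit(body: str) -> int:
    """Compute the Luhn check digit for the leading digits of a card number."""
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:  # double every second digit, starting nearest the check digit
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def bogus_card_number() -> str:
    """A Luhn-valid but meaningless 16-digit string - it names no real account."""
    body = "".join(str(random.randint(0, 9)) for _ in range(15))
    return body + str(luhn_check_digit(body))
```

Each emitted number would then be logged with a timestamp, producing the map described above when the bounces come back.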
NZL Govt Rushes Thru Controversial Anti-Piracy Law
What about corporations?
There are corporations - such as those that use open source in violation of its license, or data-harvesting companies - that are likely to rack up three infringements an hour. Are those companies subject to this law? Could Google, if it were found to have three violations, be knocked off the net for six months?
The Politics of ICANN
The referenced article is somewhat incorrect - word is that the governments aren't asking for the power for one country to veto ICANN. But it may prove that the governments are doing what they do well - using euphemisms to cover harsh intent.
ICANN pulls about $1,000,000,000 (one billion) USD every year out of the pockets of net users in the form of fiat "registry" fees, i.e. about $7 per name per year to Verisign. Given that we are paying this much to get so little, we do have a right to dig deeper into what this expensive organization is actually doing...
At the end of the article we hear an ICANN employee repeating ICANN's mantra that ICANN assures the stability of net identifiers.
That description is false.
ICANN spends 99%+ of its effort on matters that have no reasonable effect on the stability of domain names or IP addresses - unless, that is, one includes trademark protection in the definition of stability, which is something for national legislatures, not a private body that purports to promote technical stability.
There is a cure for the common ICANN - which is for people to construct competing, consistent DNS roots. Those would contain all of the top level domains that ICANN recognizes - and perhaps some boutique ones as well - but would be outside of the ICANN mandate.
The word "consistent" is important - it would be bad if people resolved names and got surprising answers (sort of like the bad Hungarian-English dictionary in the Monty Python Tobacconist sketch.)
There is no technical way to prevent people from setting up competing, consistent roots. Nor is it unlawful. And it is often done in stealth by ISPs, smart companies, or individual users. DNSSEC does not affect competing consistent roots, but it will require them to have their own root keys (subsidiary TLD keys aren't affected).
Recent events - political in North Africa and natural in Japan - suggest that having a local ability to establish a DNS root could be a valuable tool to help speed the healing of net communications when the net is torn by such events.
Long ago I suggested to ICANN that they get a monthly report from every top level domain of the top 10% or 20% of the second level names by query volume. From that ICANN could produce a Knoppix-like DVD that could be booted up and would contain a pre-populated root server with the familiar TLDs and those top 10/20% of the names. That sort of thing could be used to help kick-start local communications recovery after a natural or human disaster. But ICANN said "no, not our job".
US Seeks Veto Powers Over New TLDs
The point you make is valid - there isn't a lot of clear "need" for new top level domains.
But then again there wasn't a "need" for Facebook.
It is simply a matter of allowing people to do what they want to do - or in the case of businesses, allowing people to hope to make some money (but more likely to lose it.)
If we limit the choices that others can make then we ourselves become censors.
US Seeks Veto Powers Over New TLDs
ICANN is no longer operating under the old agreements (which went under various names) and is now under an "affirmation" that amounts to an amicable and somewhat supervised divorce between the US gov't and ICANN.
ICANN is on its own, except that it has duties under a zero-dollar purchase order to supply the "IANA Functions". But that, although it lacks definitions, has always been considered somewhat separate from the domain name issues.
There is an amusing twist - ICANN is a California corporation, and there is an old never-repealed law in the California Corporations Code that possibly defines a corporation that takes direction from a foreign government to be a "subversive organization". (See sections 35000 through 35007 of the California Corporations Code.)
US Seeks Veto Powers Over New TLDs
It is merely a religious dogma that the internet requires exactly one domain name root.
One way to fight censorship of domain names is to have multiple roots.
It would be bad to have multiple roots that lead to different answers to the same query.
The solution is to have *consistent* but multiple DNS roots. That way any censorship could be obviated simply by users (or their ISPs) changing to an uncensored root.
The definition of "consistent" makes a difference. Some define it as being absolutely the same. I relax that a bit to say that if a top level domain (TLD) exists then it must have the same contents in all roots that carry it, but that not all roots need carry every TLD.
(If TLDs have disputed contents then I claim that they are tainted goods and that any self-respecting root operator ought to put a pox on both their houses and carry none of the disputants' versions.)
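That relaxed consistency rule is simple enough to state as code. This is an illustrative data model only - a TLD name mapped to its set of delegation records, not any real root-zone format:

```python
def roots_consistent(root_a: dict, root_b: dict) -> bool:
    """Two roots are consistent if every TLD they BOTH carry has identical
    contents; neither root needs to carry every TLD the other does."""
    shared = root_a.keys() & root_b.keys()
    return all(root_a[tld] == root_b[tld] for tld in shared)

# A root may add a boutique TLD without breaking consistency...
ok = roots_consistent({"com": {"a.gtld-servers.net"}},
                      {"com": {"a.gtld-servers.net"}, "bike": {"ns.example"}})
# ...but disputed contents for a shared TLD break it.
bad = roots_consistent({"com": {"a.gtld-servers.net"}},
                       {"com": {"rogue.example"}})
```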
A side effect of this approach is that, like TV channels fighting for space on cable and satellite providers' lineups, new TLDs can arise and fight for visibility and user share without the need for a centralized authority such as ICANN.
There will, of course, be situations in which abc.example won't resolve in a root that doesn't carry .example. But progress is never perfect - look at the way the telephone system collapsed with the introduction of the touch-tone pad and the revolutionary '#' and '*' keys.
There are other, and much older, honest privacy policies out there.
Google, Microsoft Cheat On Slow-Start — Should You?
Has anyone taken a look at whether Google, Microsoft, et al are similarly pushing on the TCP congestion backoff and recovery mechanisms?
Facebook To Own the Word "Face"
At the risk of being facetious - I guess they will own face to face meetings, saving face, face-offs, face time.
I wonder what Janus (the god with two faces) thinks about this?
New Programming Language Weaves Security Into Code
I forgot to mention that a lot of work was based on Jerry Popek's UCLA Data Secure Unix.
Data Secure Unix was amazingly slow. We could type the "date" command into the shell and when we got back from lunch it would be done telling us what time it was.
New Programming Language Weaves Security Into Code
Back in the 1970's at System Development Corporation (SDC) in conjunction with groups at SRI, RSRE (in the UK), and elsewhere we were doing a lot of work on provably correct systems, including operating systems.
(The notion of "correct" was limited to a security criterion - a correct box did not need to work, only to meet the security criterion.)
We used languages such as Ina Jo and Pascal filled with lots and lots of formally shaped assertions about explicit and side effects.
This was moved down into hardware through the use of capability-based hardware, such as the Plessey System 250(?), the IBM SWARD (not the newer IBM thing by the same name), the Intel 432, and other hardware that never saw the public light of day. (Those who funded these projects were not fond of the public limelight, and some of this work is not easy to find on the web.)
I did some papers about how one might build a debugging system for this kind of secure software - debugging tends to cut through security walls - but I have never seen those on the public net.
Microsoft Applies For Page-Turn Animation Patent
Cartoons showed this sort of thing back in the 1930s - I am sure that with a bit of digging we could find Mickey Mouse or Bugs Bunny flipping through the pages of a book.
And there are more than a few movies from the 1940s and 1950s that have their leading credits done with a visible hand turning pages.
There is nothing novel about this idea, and it is something that is rather obvious even to people beyond those who build computer graphical interfaces.
US Plans Cyber Shield For Private Companies and Utilities
The net has huge tides - but unpredictable ones, such as the traffic burst that happened when Michael Jackson died.
Those traffic shifts, along with the introduction of new technologies (such as IPv6, cloud computing, and smaller things like the next twitter) will create false positives.
And an attacker, knowing that these bursts happen fairly frequently and that during them there will be false triggers, will time the launch of his attack so that it occurs during or shortly after one of those events.
Personally I don't think the NSA has the chops to do this monitoring job. Why? Because to do a good job a lot of data needs to be correlated, and the NSA, if anything, is very unwilling to share its data with others who may also be watching - like ISPs and power companies, or just those of us chatting on mailing lists and noticing that weird things are happening.