After the Sun (Microsystems) Sets, the Real Stories Come Out
Definitely true for the mainframe folks. Lots of people don't realize that virtual machines have been around for ages. Take a look at CP-67. There are actually piles of cool and interesting architecture stuff to read, with really interesting ideas that were tried.
One thing to remember about Sun is that on the server side they got a huge break when SGI (which had just bought Cray) sold Cray's 64-way SPARC server business to Sun. That became the Sun Enterprise 10000, the 'Starfire'. Which seems kind of insane in retrospect: handing one of your competitors the design for a 64-way server that paved the way for them to take a huge chunk of that lucrative market...
On the 'pretty fish' versus 'eat them' front, I often think of it this way: you have a small team that develops a great product. At some point it becomes clear to the business folks that it's a great product, and they start hiring more and more business people. Eventually you end up with lots of people all being supported by the original design. The relationship can be beneficial in the best case: the designers and engineers almost certainly don't want to do marketing and other such stuff, they want to design and engineer cool new stuff, and someone needs to ensure there is enough capital for them to continue doing that, so you have the business and sales folks making sure people buy the cool stuff. There are some common things that seem to happen, though:
Sometimes the business guys kill off the engineering team. Almost certainly this is justified as reducing expenses and thereby improving shareholder value. The problem is that they can continue to extract revenue from what the engineers and designers they fired created for some time, but eventually that stuff starts to lose value. So the business folks panic and use more and more inventive reality distortion to try to make it look cool, but eventually everyone realizes that, hey, this OS version 16.3 isn't really any different from version 7, and it turns out the 'fab gadgets' released in version 16 aren't really that fab...
It's kind of like the music industry, where you have an artist making original music and then this massive pile of people who are all dependent on the artist. Best case it works out for all; worst case the artist gets screwed while the hangers-on retire.
Of course the business folks reading this would say it's an incredibly arrogant, short-sighted, typical-engineer kind of view.
The truth is somewhere in the middle.
After the Sun (Microsystems) Sets, the Real Stories Come Out
I'm going to have a go at explaining to readers how it 'felt' to use a workstation. I have a friend who experienced the same thing working on Apollo workstations too.
There was this feeling I can best describe as being like what many people report they had as kids with home micros. You woke up and here was this awesome machine that just begged to be played with, to have hardware added, and so on. It's an awesome feeling of discovery and exploration and possibility. It's like the feeling you can have if you grab a nice big piece of blank paper and a pen: you can write whatever you want on it, draw on it, calculate something on it...
For me, and other folks who had access to workstations, it was just like that feeling. Suddenly you had this machine that was fast, had a great display, and ran a great operating system, SunOS 4.1.3. The machine was there, and all that compute + display + disk was there for YOU. It wasn't locked up in some server somewhere else, and you weren't competing with everyone else.
Later on Sun came out with some really cool things too. Anyone else remember NeWS? That was pretty cool... And NFS, for all the problems it has, is still actively used all over the place.
Why did Sun die? They died because they stopped doing what they started doing. The actual model for Sun in the early days was to take a standard Unix and build a workstation (or server) wrapped around it. They actually used to say that they weren't going to lock people into their system; they would make their system open and compete on having the best product. Think about that for a minute. They were saying, 'We will build the best damn workstation, and you will buy it because it's the best damn workstation.' Now you can argue whether the SPARCstation 1+ was better than an Apollo or a MIPS box, but as a business strategy it's hard as a consumer to complain about it. It was a massive departure from what DEC did.
Fear of Thinking War Machines May Push U.S. To Exascale
One thing that bugs the hell out of me about these things is that invariably someone, when asked about safety, says 'We can predict what it will think'. If you build an AI and it achieves the singularity, then by definition it's more intelligent than humans. Saying you can understand it is like saying you can teach a dog quantum mechanics.
It's so clearly insanely dangerous that I cannot understand how any person who is even remotely intelligent can believe such a thing is remotely safe.
God knows what it will think. For all you know it will decide all organic life should be transformed into meat bricks to build a huge rotting igloo on the equator that somehow represents the Fourier transform of yodelling by obese short people.
Iain Banks Dies of Cancer At 59
Agreed. Consider Phlebas is an outrageously awesome read; it's one of those books that gets the neurons going on so many levels.
Iain M. Banks will be sorely missed by those who took the time to read his works.
Ask Slashdot: What To Do When Another Dev Steals Your Work and Adds Their Name?
Interviewer - "We checked the source code cited, and your name isn't on it?"
You - "Thanks for checking the source code. That was work for hire, so it's owned by the company I wrote it for. While I'm disappointed my name was removed from the source, they own it, so they decide. I can cover some of the features if that would help."
The above shows that you clearly understand that work for hire is owned by the entity that hired you. You expressed your personal opinion while remaining professional about what happened, and provided a reasonable way to prove you at least understand the code.
If they go so far as to say you lied, then do you honestly, really, want to work for them? Do you want to be dealing with them when you submit your bill?
If they approached this more professionally and said something like 'Oh we could see how that could happen, maybe you can describe the challenges in that software and the solution' then you should be able to convince any reasonable person that you at least grok the problem, and explain your solution.
They can then follow up with another question, and you've avoided the pain.
We've all had interviews where the interviewer was just an incredible jack-ass. They may be intimidated by you, they may be just an incredibly insecure person or having a terrible day and acting poorly. The best way to act if at all possible is always to be professional. Give your answers, they can take them or leave them.
Remember this part if you remember anything: you are interviewing them just as much as they are interviewing you. Yes, you have to pay your bills and feed yourself (and possibly your family), but don't go into this from a position of weakness. You are a valuable commodity, and it's their job to convince you to spend some of the finite allotment of time you have in your lifetime working for them, just as much as you may want the job.
Many technology professionals and engineers are uncomfortable with negotiating. Don't be. If everyone in IT could learn that one lesson, that being hired, whether it's contract or full-time, is a negotiation, it would go a long way.
If you are dealing with a less tech-savvy, more 'business'-oriented person, you will win points (even if grudging ones): 'Damn, this technology person can actually negotiate and isn't a nerd who would work for Star Trek lunchtime showings.'
If you are dealing with a more tech-savvy person, they probably won't be focused on the business side of things at all, and you can talk shop: honestly discuss some 'pain' (without dissing any company or individual), and often you can throw in a small amount of humor. When interviewing for a technology position, it's a big plus to meet a candidate who can admit to things that were tried, turned into disasters, and got worked through.
If the interviewer has any scar-tissue at all they will understand you have been in the trenches and had things go wrong, and you can explain how you worked around it. The solution may not have been pretty or elegant but it got you and the company you were working with through the problem.
Someone who can think on their feet, evaluate what's going on, make a decision, and adapt to save the ship is worth a ton. There are so many people in technology who search for silver bullets and are so enamored with X, whether X is hardware or a software architecture, that showing this helps hugely.
ReactOS 0.3.15 Released
It's Plan 13.
Plans 1-3 were various timesharing OSes.
Plan 4 was DOS
Plan 5 was Amiga OS
Plan 6 was Windows 9X
Plan 7 was Windows NT
Plan 8 was OS X
Plan 9 was Android
Plan 10 is being revised; it was going to be Vista & Windows 7, but Windows 8 hasn't taken off.
Plan 11 is Under NDA
Plan 12 is even more NDA buried
Plan 13 is the 'far-out plans if the other plans fail' bucket.
Plan 13b is to create an NT clone and sell^h^h^h^hNOW FREE! on CDs^h^h^h^hDVDs^h^h^h^hDownload
Mars Explorers Face Huge Radiation Problem
It'll be Orange flavored, but it'll have a little badge that says 'Now Boron Enriched!'.
How Did You Learn How To Program?
I'd been into computers from a young age, and after looking over things like ZX81s and VIC-20s, not to mention drool-worthy Atari 8-bits, Dad, being a ham, decided to buy a second-hand computer that was for sale.
It was a C1P. Best damn machine he could have bought. It was the original hackers' SBC: it came with full schematics, and pretty much everyone who had an OSI hacked on it, especially the Superboard II / C1P folks. My dad's work was getting rid of an ADM-1 (yes, 1! uppercase only) terminal; we populated the RS-232, and a few months later I got my hands on a DECwriter LA36.
Getting the OSI did a few things, and one was getting me into hardware from a young age. Want joysticks? Have at it with the soldering iron, wiring into the polled keyboard. Want a faster cassette? See those traces hanging off that divider? Have at it.
A standard mod was a 2x performance increase. Just hope your 2114s can hack it (are those 550 ns parts really 550 ns?): find the divider for the CPU clock, cut the trace, hook in a switch, and hey presto, instant 2x. Oh no, your tapes don't load? You should have saved them at 600 bps after you wired in the baud-rate switch.
The 'monitor' was called SYNMON. Two values on the screen. This thing was primitive; think of a memory-mapped-display version of an 1802 or KIM-1 monitor.
0000 A9 - LDA immediate
A1 - the OSI character-graphics white square
0F - LSB of the address
D2 - MSB of the address (video-mapped memory)
Hit . again.
Punch in 222 again.
And so you see it.
Hit G for Go.
C gave you a cold start and you got...
Microsoft BASIC V1.0 rev 3.2
(c) 197? Micro-soft - memory is fading...
7423 bytes free
LOAD and SAVE just turned ON a redirect between the keyboard and the serial port. So loading a BASIC program was like watching someone type it in at 30 CPS, and SAVE was like watching a slow 30 CPS LIST.
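To put a number on that, a quick back-of-envelope sketch in Python; the 30 CPS is from the post, while the 8 KB listing size is an assumed example:

```python
# Time to LOAD a BASIC listing over the 30 CPS serial redirect.
listing_chars = 8 * 1024         # assumed size of a decent BASIC program
cps = 30                         # characters per second, from the post
minutes = listing_chars / cps / 60
print(f"{minutes:.1f} minutes")  # -> 4.6 minutes of watching it 'type'
```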
Ethernet Turns 40
Ugh, tired brain syndrome: off by an order of magnitude!
100 MB/sec over 40 GB/sec = 1/400th of the memory bandwidth. So a modern machine with 40 GB/sec of memory bandwidth and 1 gigabit ethernet has 1/80th of the network IO of the old PC.
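Spelled out in Python, with the numbers as given in this thread:

```python
# Network bandwidth as a fraction of memory bandwidth, then and now.
old_mem_kps = 2500          # KB/s, ~4.77 MHz 8088 memory bandwidth
old_net_kps = 500           # KB/s, what 10 Mbit ethernet could push
new_mem_kps = 40_000_000    # KB/s, a 40 GB/s modern machine
new_net_kps = 100_000       # KB/s, ~100 MB/s through gigabit ethernet

old_ratio = old_mem_kps / old_net_kps   # 5: net was 1/5 of memory bw
new_ratio = new_mem_kps / new_net_kps   # 400: net is 1/400 of memory bw
print(old_ratio, new_ratio, new_ratio / old_ratio)  # 5.0 400.0 80.0
```

So the modern machine's network really does carry 1/80th of the relative I/O the old PC had.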
Ethernet Turns 40
Consider 10 Mbit ethernet on an early PC. It could push maybe 500 KB/sec. Figure a 4.77 MHz 8088 had about 2500 KB/sec of memory bandwidth. Ratio: 1/5.
Now imagine if the common ethernet on your machine were 1/5th of the memory bandwidth. Take a PC with 40 GB/sec of memory bandwidth; imagine having 8 GB/sec over ethernet at a reasonable cost.
Imagine the things we would be doing if we had that. Instead we commonly have 100 MB/sec. 1/40th of the memory bandwidth.
Just think for a minute about how different things would be with a network that pushed 8 GB/sec. You could swap over the network, you could do all kinds of cool things.
So while I really *like* ethernet, I wish we hadn't slipped so far down the slippery road of lousy I/O.
Why We Should Build a Supercomputer Replica of the Human Brain
If it does become sentient, or at least passes the Turing test, will you kill it? If you do, and it passed the Turing test, you are killing something that can at worst simulate intelligence at the level of a human.
Do you give it a natural method of decay like a human brain? If you don't, do you just keep it running forever, or flip the switch?
Ask Slashdot: How Do You Deal With Programmers Who Have Not Stayed Current?
If you are dealing with non-trivial problems, a good engineer/programmer understands fundamental things that matter far more than the latest language trends.
If you could magically have Knuth, Dijkstra or Hoare look at a hard problem do you think they'd do a lousy job? If you do, you need to go back and read the stuff those guys have written.
Having a soup of buzzwords is fine, but it is worth less than nothing if they don't understand how to look at a problem and determine its dependencies. The best case for the gee-whiz person is that they by chance end up using a pattern someone else came up with that maps onto the problem. The worst case is that you get something that looks nice and seems nice but is neither correct nor robust.
Concurrency didn't start with cheap desktop multi-core microprocessors. Dijkstra wrote his stuff about semaphores in the mid-sixties.
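Dijkstra's P and V operations survive essentially unchanged in every modern threading library. A minimal sketch of the classic bounded-buffer pattern in Python (buffer size and item count are arbitrary choices for illustration):

```python
import threading

buffer = []
empty = threading.Semaphore(4)   # free slots in the buffer
full = threading.Semaphore(0)    # items waiting in the buffer
lock = threading.Lock()          # guards the list itself

def producer():
    for i in range(10):
        empty.acquire()          # P(empty): wait for a free slot
        with lock:
            buffer.append(i)
        full.release()           # V(full): signal an item is ready

def consumer(out):
    for _ in range(10):
        full.acquire()           # P(full): wait for an item
        with lock:
            out.append(buffer.pop(0))
        empty.release()          # V(empty): free the slot

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t1.start(); t2.start(); t1.join(); t2.join()
print(out)  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The producer never overruns the buffer and the consumer never reads an empty one, exactly the guarantee Dijkstra's semaphores were invented to give.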
There's a hell of a lot to be said for sitting in a quiet place and thinking through a problem perhaps with a pencil and piece of paper. If you've never worked on a problem that required you to do that, then either your problems have been easy or you're an unsung genius.
Ask Slashdot: What's Your Company's Marketing-to-Engineering Ratio?
It may NOT be the case at your company, but over the years I've seen total stupidity at many companies with regards to engineering budgets or lack thereof.
Consider an engineer; let's say that hypothetical engineer is paid $100K a year. They talk to their manager and say, 'Hey, if I could spend $20K I could quadruple the number of build/test/debug cycles I get done in a day. They currently take 5 hours on average, and 4 hours of that is building and running the test suite.'
Many managers would say, 'You're out of your damn mind! $20K? That's a car!' They are, of course, killing the company. Even if that engineer is off by a factor of two and can only double the number of cycles they get done in a day, you just turned down buying another 'magical engineer' who somehow instantly knew the problem space, knew the details of the environment, and was instantly efficient, all for $20K.
The managers are ALSO forgetting that many studies have shown that ruthlessly limiting the number of people working on a project is one way to improve the chances of success. This was shown in spades to IBM when Control Data built the 6600: the total CDC team was 34 people, including the janitor.
A wise manager would say, 'Let's talk this through,' and check the engineer's thinking. Maybe ask another engineer. If it looks good, then go for it. Even if it only lets the engineer turn those 5-hour cycles into 4-hour cycles, it's paid for itself.
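A back-of-envelope version of that argument in Python; the salary, cycle times, and $20K are from the scenario above, while the loaded-cost multiplier and working days per year are my assumptions:

```python
# Payback time for the $20K tooling spend, pessimistic case.
salary = 100_000          # engineer salary, $/yr, from the scenario
loaded = 1.5              # assumed overhead multiplier on salary
workdays = 230            # assumed working days per year
day_hours = 8

cost_per_hour = salary * loaded / (workdays * day_hours)

old_cycle_h = 5.0         # build/test/debug cycle today
new_cycle_h = 2.5         # pessimistic case: only a 2x speedup

# Hours of waiting eliminated per day, if the engineer runs the same
# number of cycles as before and banks the difference:
cycles_per_day = day_hours / old_cycle_h
saved_h_per_day = cycles_per_day * (old_cycle_h - new_cycle_h)
payback_days = 20_000 / (saved_h_per_day * cost_per_hour)
print(f"${cost_per_hour:.0f}/h, {saved_h_per_day:.1f} h saved/day, "
      f"payback in {payback_days:.0f} working days")
```

Even in the pessimistic halved-cycle case, the $20K pays for itself in roughly three months of working days.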
Politician Wants Sci-fi To Be Mandatory In School
There Ain't No Such Thing As A Free Lunch.
Or beer, as in Heinlein's example in "The Moon Is a Harsh Mistress". Understanding that would be a great thing for every kid to learn. I'm surprised by how many otherwise intelligent adults appear tongue-tied, after rallying for some new subsidy, when you just ask the simple question, 'Fine, what will you cut to cover the cost?'
My brain nearly imploded once when a college-educated friend said, 'We should subsidize solar panels on houses,' and I replied, 'Groovy. So what would you cut?', and they said, 'Nothing. Just print more money.'
The Eternal Mainframe
Good on you for doing the exercise! I agree Dhrystone isn't perfect, but I wanted a benchmark whose numbers are relatively easy to find for a machine.
For the 286 there's about a 10:1 ratio. Your ratio is 230:1. So let's just scale it and see what that would have looked like: 4000 dhrystone mips / 230 ~= 17 K/sec.
So effectively the I/O-to-compute ratio of your laptop is about that of a 286/16 with a floppy disk.
The bad news is it gets *worse* than your laptop with typical x86 servers. I/O doesn't scale like compute. Take an x86 box people might throw VMs on; it probably has worse I/O versus compute than your laptop, or a 286/16 with a floppy.
Totally right about latency too. That's an even more depressing ratio.
The Eternal Mainframe
What's worse is that they don't understand floating point.
They don't understand that in floating point you can totally have a situation where:
(a + b) + c != (b + c) + a
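A concrete Python demonstration; the specific constants are my choice, picked because the rounding difference shows up immediately:

```python
# Floating-point addition is not associative: the rounding applied to
# each intermediate sum depends on the order of the operations.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.1 + 0.2 rounds to 0.30000000000000004 first
right = (b + c) + a   # 0.2 + 0.3 is exactly 0.5, then + 0.1 gives 0.6

print(left, right, left == right)  # 0.6000000000000001 0.6 False
```

Same three values, same operation, different answers purely from the grouping, which is exactly why naive parallel or reordered summation can change results.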
The Eternal Mainframe
Try finding out yourself. Ask the new kids some simple questions:
What's the memory bandwidth of that x86 desktop or laptop, roughly? Special points if they break out the caches.
Ask them how many dhrystone MIPS (very roughly) that uP has.
Ask them the ratio of the main system memory bandwidth to MIPS.
Ask them the ratio of the main system memory bandwidth to the I/O storage they have.
They just never get exposed to this stuff; they have no frame of reference. Now ask them to compare those numbers even to a regular 286-era ISA-bus PC. I'll even give you some numbers:
286/16 ~ 4K dhrystones/sec on a good day
Disk (40 MB IDE on ISA) ~ 400K/sec
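As a sketch of where that comparison goes, here's disk bandwidth per unit of compute, then and now, in Python. The 286 disk figure is from the list above; the 286 DMIPS estimate and the modern laptop figures are my assumptions:

```python
# KB/s of disk bandwidth available per Dhrystone MIPS, then vs. now.
old_dmips = 2              # a 286/16 was roughly 2 DMIPS (assumed)
old_disk_kps = 400         # 40 MB IDE on ISA, from the numbers above
new_dmips = 4_000          # modern laptop, per this thread (assumed)
new_disk_kps = 500_000     # SATA SSD, ~500 MB/s (assumed)

old = old_disk_kps / old_dmips   # KB/s of disk per DMIPS, 286 era
new = new_disk_kps / new_dmips   # KB/s of disk per DMIPS, today
print(old, new)                  # 200.0 125.0
```

Even crediting the modern machine with an SSD, each unit of compute gets less disk bandwidth than the old ISA-bus box did.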
The Eternal Mainframe
It's great to see these things still around. They are really fun to read. I'm actually a bit of an Atari 8-bit fan, with some 8-bits I still use occasionally for fun.
The Eternal Mainframe
They weren't always. Some Model 360s were pretty decent. The CDC 6600, while called a 'supercomputer' nowadays, was really a 'large computer'. It was a mainframe. The problem with mainframes is the same problem with every computer out there: the latency wall. There were only a few companies that really pushed the physics, and that has largely stopped at the 'system' level. You see a few companies playing with interconnect topology, but that's not really pushing the physics.
If you take the ratio of compute to I/O of any typical modern server, it's horrendously bad. To anyone out there who thinks their x86 rocks, a few simple questions:
1) What's the ratio of memory bandwidth at various levels to I/O bandwidth? Compare that to a Mainframe from the 60's.
2) How long would it take a typical server to swap out all of its memory? You can use SSDs if you want.
Hint: you will find that 2) takes many seconds to minutes for a decent-sized x86 server, even with SSDs. That IBM mainframe could maybe swap out all of its memory in a second or two.
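A rough sketch of question 2 in Python; the RAM size and SSD speed are assumed figures for a mid-size x86 server, not measurements:

```python
# Best case: one long sequential streaming write of all of RAM to one SSD.
ram_gb = 512                   # assumed server memory
ssd_mb_per_s = 500             # assumed SATA SSD sequential write speed
seconds = ram_gb * 1024 / ssd_mb_per_s
print(f"{seconds:.0f} s")      # ~1049 s, i.e. roughly 17 minutes
```

And that ignores filesystem overhead and the fact that real swap traffic is nowhere near sequential.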
The Eternal Mainframe
Take someone's description of their cloud computing service and compare it to the concept of the computing utility, like the one the MULTICS people talked about; it's pretty damn similar.
Some idiots will claim it's not, because they can get to their data! Which is total garbage: while technically they might be able to get to their data, it sucks to empty a swimming pool through a straw, and that's what the bandwidth of an internet connection is like when there's a tonne of data in the cloud.
So then they will say 'run your queries in the cloud', and that's awesome until the problems don't map efficiently to the topology the cloud vendor went for.
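The swimming-pool-through-a-straw arithmetic, sketched in Python; the dataset size and link speed are illustrative assumptions, not figures from the post:

```python
# Time to pull a cloud-resident dataset down a typical internet link.
data_tb = 10                       # assumed dataset size, decimal TB
link_mbit_per_s = 100              # assumed internet connection speed
bits = data_tb * 8e12              # TB -> bits
seconds = bits / (link_mbit_per_s * 1e6)
print(f"{seconds / 86_400:.1f} days")  # ~9.3 days at full line rate
```

Over a week of saturating the link just to repatriate the data, which is why 'you can always get your data out' is technically true and practically useless.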