If Java Is Dying, It Sure Looks Awfully Healthy
Do people still write applets? I thought they were actually dead. Any motion is the decomposition taking place.
Java is a server language of choice for a lot of developers, and if not Java the language, then at least a JVM language (Scala, Groovy, Jython, etc).
WRT virtualization:
Windows 2012 Server boots first, then after a bootstrap the Windows 2012 O/S migrates out of Ring-0; so it eventually ends up as a Type-1 but it doesn't boot as a Type-1. (The same is true of Xen, for example; it boots as pure Linux and then Xen takes over Ring-0)
ESXi is a true bare-metal hypervisor and boots directly into the hypervisor.
Workstation and Fusion, once spawned, are hosted applications, but they still have direct access to the hardware if you're on a box with hardware-assisted virtualization instructions (ie, Intel VT-x).
So the idea of Type-1/Type-2 is sort of dated anyhow as a division.
about a year and a half ago
WRT virtualization:
... "my MacBook Pro has 9 VMs on it right now, 3 of which are powered on, and I build clouds for a living" choice?
... "Are you kidding? My VMs have VMs in them." about a year and a half ago
Amazon: Publishers Strong-Armed Us On E-Books
Pricing can be based on utility, rather than cost; see
http://en.wikipedia.org/wiki/Utility. I completely agree with you in principle, but I've found I am now just buying ebooks, even when I could get a paper copy for less, because:
- I get it instantly
- I tote my entire library around on a device that weighs 11 ounces
- I can read on multiple devices and it syncs my position automatically
And I recently gave >1000 books to the library when moving, so I know that despite my fears that Kindle as a platform might die, I'm not necessarily keeping all my books forever. (Although since my daughter is 11 and I'm now giving her books I bought when I was a kid... there is definitely some merit to it. If anything, this is the one thing that keeps me occasionally buying paper books; the loaning and hand-me-down factor.)
I'll be honest - I hate myself a little for capitulating, because on principle I completely agree with you. But I also drop $6 on triple lattes frequently and I just feel too busy to feel any rage over a few bucks here or there. I applaud everyone who goes for the cheaper option even if they'd prefer the e-book at that price.
The equivalent crap happens in movies, as you point out. HD movies on iTunes being $15 instead of $10, or $20 instead of $15, say, seems fairly absurd, since the difference is perhaps $.02 of bandwidth. TV shows are even crazier, at $3 instead of $2.
The reality is, publishing is a completely shitty business. Macmillan's parent company (a publishing conglomerate) made a whopping 6.7% on 2.1B Euros in 2005 (BEFORE taxes). (2010 they were up to 2.25B euros)
That's not exactly rolling in the dough.
about a year and a half ago
Why Your Users Hate Agile
I've seen actual agile, and I've seen stuff called "agile", which means, "we don't plan, but we do standups". "We're using agile" is a codeword sometimes for "we don't like or even understand SDLC, so we'll use no process and call that lack of process agile."
There is no process. Things fly in all directions, and despite SVN [version control], developers overwrite each other's work and then have to have meetings to discuss why things were changed. Too many people are involved, and, again, I repeat, there is no process.
Not even remotely describing agile. Her "has 17 years of web development experience" jibes with my experience in a 5-man web shop where the term agile was literally a euphemism for "no process", and there was a COO asking for 6-month gantt charts despite the "agile" label; vs a stint at a top-3 software company where we had agile tools (ie, Rally), everyone got trained on it on our team, we had a very defined process (including using gitflow for branching and a review process pre-merge), and a full-time scrummaster.
I don't even think this is really giving agile a bad name, because I think anyone who has experienced both (or, say, just real agile) could tell the difference easily.
about a year and a half ago
Ask Slashdot: Safe Learning Environment For VMs?
So, I co-wrote
this book on virtual security and am a former VMware Cloud Solutions Architect. And I'll preface this advice by saying that, if you want to talk more in depth, feel free to ping me. First initial, last name at gmail will work. (The email I have attached to slashdot I glance at occasionally, but it gets almost purely spam and so I'd likely miss anything.)
From my perspective, the first question is which hypervisor to use:
- VMware is mature, you can get a free license for the base hypervisor (which is quite feature rich; this is no trial product) for up to 32GB per physical box, and it is widely used. If VMware remains as relevant in the future as it is now, it's actually a very solid skillset to have.
- If you have physical hosts over 32GB, VMware ceases to be free.
- Some features require more advanced VMware stuff, including vCenter Server, which isn't free - for example, VMware's live VM migration feature (vMotion).
- VMware is almost entirely closed on the internals; the hypervisor is closed source (other than a not-useful-for-your-purposes "open source" bundle that contains their modified GPL code only). They have a bunch of APIs for internal functions (ie, tracking changed blocks on the virtual iSCSI devices, for example), but those are generally restricted to partners; so if your students want to actually hack the virtualization layer, they can't. Then again, letting them do so wouldn't really be safe.
- On the other hand, VMware layers do have nice APIs that are reasonably accessible for doing non-internals stuff; things like powering VMs on and off, changing their allocated RAM and CPUs, etc.
- VMware has a nice set of tools, including CLI tools, which work well even with the free versions, that can allow you to move virtual machines in and out of specific hypervisors (not while the VMs are powered on), and into and out of VMware's desktop products (Workstation for Windows and Linux, Fusion for Mac). (Google ovftool for the cross-platform CLI tool, for example; it can import/export to/from ESX, vCenter Server, Workstation, Fusion, and vCloud instances.)
- VMware has a nice set of tools for snapshots and backups, even on the base hypervisor; for example, I have a personal ESX box at a provider and I use this tool to back up the VMs back and forth, which can be done from outside the OS without powering the VM down, and it's free.
- I found that some things I'd think of as mandatory for a lab environment (ie, thin provisioning) were just built-in on the VMware side, but required a fair bit of extra work (and added extra wrinkles) on the open-source side.
The virtual networking on VMware is dramatically more mature in my experience; my experience with Xen & KVM is now dated (it's been 2 years since I was in the thick of writing that book, which was the last time I was really exploring the open-source hypervisor networking bits). I found that depending on the version of the hypervisor OS, which hypervisor, which kernel, which guest, etc, you could fall into all sorts of traps. I had some examples in the book where I showed, for example, generating and applying ebtables configurations on the host OS (the Xen Linux hypervisor OS) to block forged frames from coming across the bridge from one of the guest Linuxes.
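To give a flavor of the open-source approach, here's a minimal sketch of generating that kind of ebtables anti-spoofing configuration. The interface name and MAC address are hypothetical, and this is my illustration rather than the exact rules from the book:

```python
# Sketch: generate ebtables rules that pin a guest's bridge interface to
# its assigned source MAC, dropping forged frames from that guest.
# Interface names (vifX.Y) and MAC addresses here are hypothetical.

def antispoof_rules(vif: str, mac: str, chain: str = "FORWARD") -> list[str]:
    """Return ebtables commands pinning `vif` to a single source MAC."""
    return [
        # Accept frames from the guest only if the source MAC is correct...
        f"ebtables -A {chain} -i {vif} -s {mac} -j ACCEPT",
        # ...and drop everything else arriving on that interface.
        f"ebtables -A {chain} -i {vif} -j DROP",
    ]

for rule in antispoof_rules("vif1.0", "00:16:3e:aa:bb:cc"):
    print(rule)
```

In practice you'd emit one rule pair per guest interface and apply them in the hypervisor OS; the point is that this enforcement has to be bolted on, per host, per guest.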
Compare that to the VMware side, where you could in theory wire everything up to dumb hubs, even, and enforce network separation at the hypervisor layer with VLAN tags applied to the portgroups where you attach VMs. (Warning: I'm not suggesting you blindly do that; but VLAN enforcement on the VMware side is fairly rigid when configured well.)
My own book is a fun read for some of these concerns, although
Haletky's book is probably the canonical work on the subject. (Although it is -slightly- dated at this point, it is still a wealth of great information, and it was a huge help to me as a primer when I first joined VMware.)
Depending on how far you want to deep dive, my second choice might be Xen+Eucalyptus; if you could front-end your hypervisors with Eucalyptus and build an internal cloud, you'd also get your students one foot on the road to playing devops with AWS. (There are plenty of VMware clouds out there, but I don't know of any offhand that have the equivalent of the AWS micro tier, which would let students even occasionally deploy their boxes to the net.)
One final consideration is that VMware actually gracefully does nested virtualization; you can run ESX inside ESX, and you can run Xen inside of ESX, and they generally function well. The
Xen FAQ implies Xen supports it, but I'm unsure if Xen can nest VMware or KVM for variety; I can say from experience that I had VMware, with Xen inside it, with a guest OS inside that, all on my laptop just fine.
Good luck! This is super-fun. I will say: don't overlook the value of the actual virtualization layer experience! It is currently far harder to find solid virtualization & cloud engineers than it is to find a Linux admin. The rise of virtual appliances and infrastructure as an extension of code makes me feel like the devops & virtualization skillsets will remain in strong demand, and operating systems may be simply seen as containers for applications.
about a year and a half ago
Ask Slashdot: Safe Learning Environment For VMs?
VLANs are not for security! Any two things plugged into the same switch, whether virtual or real, can talk to each other if sufficiently motivated.
As you pointed out below, VLANs in general are trustworthy when properly configured with a proper switch. I did nothing but netsec work in the late 90s, and everything was airgapped; we'd never have frames from two networks on the same wire. If you wanted to cross security zones, it was at L3 on a firewall and to different wires and switches.
On the other hand, it seemed like back then a new practical way to defeat VLANs was coming out every other week, so this was a wise precaution.
That said, keep in mind that VMware also affords some additional security in terms of VLANs. Physical switches have to connect to virtual switches to interact with the VMware layers (either the hypervisor for control traffic, or with the VMs for VM traffic), and the hypervisor itself will enforce a lot of things. On a VMware vSwitch properly configured:
- VMs can't enter promiscuous mode, change their MAC address, or forge transmits with the wrong L2 address
- QinQ frames are discarded
- The hypervisor itself will determine which virtual NICs on a vSwitch should receive copies of a frame, depending on which VLAN tag is on a portgroup
- Guests can't send tagged frames if their portgroup is set with a VLAN; you have to specifically configure a trunk on a portgroup to pass VLAN tags in and out of the guest environment
If the network was homogeneously ESX nodes and administratively controlled network equipment, you could likely enforce security between VMs with VLANs even with a dumb hub.
Obviously, airgapping and single-role wires will create better security than VLANs, because there always remains a chance that an undiscovered bug will allow breaching that L2 barrier, but that's true for everything.
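As a toy model, the per-frame enforcement described above might look something like this. This is purely illustrative - the field names and logic are invented, not VMware's actual implementation:

```python
# Toy model of the per-frame checks a hypervisor vswitch might apply to
# traffic leaving a guest, per the properties listed above. Illustrative
# only; this is not VMware's implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    src_mac: str
    vlan_tag: Optional[int]    # 802.1Q tag, or None for untagged

@dataclass
class Portgroup:
    assigned_mac: str          # MAC the hypervisor assigned to the vNIC
    vlan: Optional[int]        # VLAN the portgroup enforces
    trunk: bool = False        # only trunked portgroups pass guest tags

def accept_from_guest(pg: Portgroup, f: Frame) -> bool:
    if f.src_mac != pg.assigned_mac:   # forged transmit: drop
        return False
    if f.vlan_tag is not None and not pg.trunk:
        return False                   # guest tagging not allowed
    return True

# A guest on a non-trunk portgroup can't forge its MAC or send tags:
pg = Portgroup(assigned_mac="00:50:56:01:02:03", vlan=10)
assert accept_from_guest(pg, Frame("00:50:56:01:02:03", None))
assert not accept_from_guest(pg, Frame("de:ad:be:ef:00:01", None))
assert not accept_from_guest(pg, Frame("00:50:56:01:02:03", 20))
```

The real enforcement also happens on the receive side (which vNICs get copies of a frame, based on the portgroup's VLAN), but the transmit-side checks are the ones that matter for containing a hostile guest.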
about a year and a half ago
Senators Seek H-1B Cap That Can Reach 300,000
Realistically, I view the ability to bring in highly skilled workers as a huge boon for us. Tax revenues, technological innovation, business agility - etc. People who are really driving technology and innovation create way more value than they capture, and they become the rising tide that lifts all boats.
But how can you identify them? We all know companies that want to import workers for less skilled jobs carefully tailor the job descriptions to avoid any domestic competition, don't publicize the jobs widely, etc.
Salary is the answer. We should prioritize H-1B visas by salary. The more you are paying the worker you import, the higher on the list they get to be. Any increase in the cap requires a certain number of workers at the top of the salary curve; if your salary would put you in the top 1% of workers in any science or technology field, then come on in; I don't care how high the "cap" goes. As you move toward the middle of the bell curve, the total number of workers we'll import declines. We shouldn't import even one worker below the median salary. I don't think we should move an inch over the current cap unless everyone over the cap is at least in the top 20%.
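The allocation rule I'm describing is simple enough to sketch: rank applicants by offered salary, never admit below the field's median, and fill from the top. All the numbers here are invented for illustration:

```python
# Sketch of salary-prioritized admission: highest offered salary first,
# with a hard floor at the median salary for the field. Invented numbers.

from statistics import median

def admit(applicants: dict[str, int], field_salaries: list[int],
          cap: int) -> list[str]:
    """Admit up to `cap` applicants, highest salary first, none below median."""
    floor = median(field_salaries)
    ranked = sorted(applicants.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, pay in ranked if pay >= floor][:cap]

field = [60_000, 80_000, 100_000, 120_000, 200_000]      # median: 100k
pool = {"a": 250_000, "b": 95_000, "c": 140_000, "d": 101_000}
print(admit(pool, field, cap=2))   # ['a', 'c'] - 'b' is below the floor
```

Under this rule, raising the cap only ever adds workers from further down the (above-median) salary distribution, which is the point.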
Why Girls Do Better At School
But boys are still smarter.
I once read a summary of a study that indicated this is
somewhat wrong. Boys and girls both have roughly the same averages, but boys have a higher standard deviation. This means there are more "smart" boys and more "dumb" boys; but boys aren't smarter overall. It did mean that if you asked, "How many of [gender] have [intelligence at some high sigma]?" it would indicate there were more boys, unless you were looking for people around the median. No idea if this was ever corroborated but I thought it was interesting.
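The equal-means/different-variance point is easy to demonstrate numerically. The specific means and standard deviations below are invented purely for illustration, not taken from the study:

```python
# Illustration: equal means, different standard deviations. The wider
# distribution has more members in BOTH tails, yet the averages are
# identical. Parameter values are invented for the example.

from statistics import NormalDist

boys  = NormalDist(mu=100, sigma=16)   # same mean, wider spread
girls = NormalDist(mu=100, sigma=14)   # same mean, narrower spread

def share_above(dist: NormalDist, cutoff: float) -> float:
    return 1 - dist.cdf(cutoff)

# More of the wider group above a high cutoff...
assert share_above(boys, 140) > share_above(girls, 140)
# ...and also more of them below a low cutoff (the other tail)...
assert boys.cdf(60) > girls.cdf(60)
# ...but at the median the two groups are indistinguishable.
assert abs(share_above(boys, 100) - share_above(girls, 100)) < 1e-9
```

So "how many of each gender are above sigma X" gives different answers at the tails even when "who is smarter on average" is a wash.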
Following the rules, paying attention in class, and kissing your teachers' asses can only carry you so far without real intelligence to back it up. And most of the A-student girls I went to school with were dumb as cold shit compared to me on my laziest B-student day.
Time for a Calvin Coolidge classic:
Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful people with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent. The slogan "press on" has solved and always will solve the problems of the human race.
I don't see the additional bachelor's degrees or the additional brains as a guarantee of anything. The genius who flunks out of college because he discovers for the first time he actually has to study and actually has no idea how to do it is almost proverbial.
Ask Steve Wozniak Anything
Apple is definitely not the most powerful company ever. Not disagreeing that some of their tactics (legal, technological, and ecosystem-lockin) are "evil", but their enormous size and profit is currently coming from enormous margins on incredibly popular consumer goods.
Profits != Power, per se. Exxon Mobil, for example, is deeply involved in government policy, and they (and other oil companies) have enormous sway on environmental policy, military and foreign policy, and incredible sway in many nations where they have oil production operations.
Meanwhile, arguably Microsoft is more powerful as well; despite their much lower market cap, they are deeply entrenched in big areas, including servers (where they are basically the big alternative to Linux), they are a dominant force in the traditional gaming market (between Xbox and their influence on PC gaming), and they're making interesting inroads in new areas like virtualization/cloud services. They've even managed to make some headway with Bing, rising to the #2 search engine spot. (Where Google still dominates; Google won the Bing challenge 5-0 when I took it.)
Apple is a very unique company though, and they do have certain "powers" that nearly no other company - at least US company - has, but most of them are hard to use for evil. For example, they're possibly the only company left in the US capable of doing the full-platform hardware/software design that they do. The Dells of the world have outsourced
too much of their design to their supply chain, which seems to leave them unable to break new ground. If anything, I think the company most likely to rise up and produce its own full-platform hardware is Google, since they're actually willing to do engineering work that isn't purely in pursuit of a hardware profit.
FWIW, any honest comparison of Apple products on a price basis can't really conclude they are "incredibly overpriced". Apple has extreme control over their suppliers - Foxconn operates on a very thin margin, so much so that Apple basically had to directly approve pay raises for their workforce because their margins are so tight. Yes, they do have higher margins, and depending on the model, you -may- pay an extra $100 to $200 that goes to Apple's bottom line. Most of the rest is Apple picking superior hardware. Go check a teardown list.
Richman recalled that Apple amassed $4.976 billion in revenue from the sale of 3.76 million Macs during its previous quarter, yielding an average selling price of $1,323.40 per Mac. He then multiplied that figure by a 28% gross margin estimate for Mac sales from Jefferies & Co. -- which is still several hundred basis points below the company's reported average -- to arrive at a profit of $370.55 per Mac sold.
By comparison, HP’s Personal Systems Group brought in $9.415 billion in revenue and turned a profit of $533 million last quarter. The PC maker's operating margin, which doesn’t factor in overhead costs, came in at 5.66%.
Ask Slashdot: What Distros Have You Used, In What Order?
slackware -> redhat -> centos/ubuntu, with more thrown in (debian and (ugh) suse)
I now typically install CentOS on servers and Ubuntu as a development VM/desktop environment. (But in ~08 I switched from using a Linux Desktop after 15 years of Solaris and Linux desktops, to using a Mac.)
Verizon Bases $5 Fee To Not Publish Your Phone Number On 'Systems and IT' Costs
Twice I got reps to list my name as John Doe for my phone number listing. When someone called for Mr. Doe, I said I was speaking. Whatever they offered, I quickly sounded very interesting, and said, "Just one minute, I'll be right back, that sounds great." Then I would set the phone down (not hanging up) and go about my business. Then I simply stopped getting a land line.
Ubisoft Uplay DRM Found To Include a Rootkit
I'm going to contact my Congresspeople, and ask them to ask the Department of Justice to investigate and prosecute any violation of wiretapping and/or computer crime laws which may have occurred.
Is OpenStack the New Linux?
So, the general attributes of cloud for IaaS, offhand, are:
- Elasticity; you can provision and deprovision it dynamically and rapidly, and you pay only for what you use (and granular billing to go with it)
- Redundancy "under the hood"; your specific instance may fail, but a cloud service should heal without intervention from a tenant, beyond doing things required by their instance(s) restarting
- Multi-tenancy - meaning many unrelated entities can safely share the same hardware with a separation of concerns
- API interfaces
- Accessibility over a network
Some people would include a lot of other attributes, such as "linearly scalable" (ie, 1 instance = N units of processing, then 2 instances should = 2N units of processing).
Ultimately, the promise of cloud computing is to deliver just as much computing as you need, where you need it, only for as long as you need it, with ~0 setup needed on your part. If you've ever provisioned servers, you end up asking something like:
- Do I need shared hosting, dedicated servers, or my own colo space to set up?
- Do I need routers, firewalls, load balancers, VPN concentrators, etc?
- Which things need (for security/role reasons) separation?
And then, what's your timeline when any of those answers change?
Cloud handles application scale-up and scale-down more gracefully; this is one of the things that's been driving virtualization in the enterprise for a decade: enterprises can consolidate servers, and old applications can share a tiny slice of hardware but still not be end-of-lifed, rather than needing their own server to run on. Applications which have a sudden burst of popularity could conceivably scale up massively - imagine a world where no one is ever slashdotted.
Virtual networking can give every application its own isolated network with its own firewall policy, using ~30MHz worth of CPU.
Anyhow, this sort of thing has driven virtualization in the enterprise for a while because of capex cost. The average utilization of non-virtualized servers is, iirc, ~30%; post-virtualization, it's 80-90%+. That means enterprises that use virtualization simply spend less than half on servers and the costs of maintaining them. Then there's opex. Rather than the complex provisioning associated with sizing, installing, and maintaining bespoke computing for every user/org/BU, the IT process can be streamlined to one-size-fits-all provisioning, and the virtualization/cloud layer can carve it up dynamically. You have way fewer people needed per piece of hardware. To say nothing of how the resource sharing makes it self-healing. Physical server dies? The virtual machines that were on it power up automatically on a different machine. (In fact, VMware FT can actually add an application-independent hot failover to any x86 server; physical hardware dies, and the shadow copy immediately resumes running with the full state on the failover hardware.)
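The "less than half" claim falls straight out of the utilization numbers. A back-of-envelope sketch (the demand and capacity figures are arbitrary; only the ratio matters):

```python
# Back-of-envelope: the same workload demand hosted at ~30% average
# utilization (bare metal) vs ~85% (virtualized). Units are arbitrary.

import math

def servers_needed(total_demand: float, per_server_capacity: float,
                   avg_utilization: float) -> int:
    """How many servers to host `total_demand` at a given utilization."""
    return math.ceil(total_demand / (per_server_capacity * avg_utilization))

demand, capacity = 1000.0, 10.0
before = servers_needed(demand, capacity, 0.30)   # 334 servers
after = servers_needed(demand, capacity, 0.85)    # 118 servers
print(before, after, round(before / after, 2))    # 334 118 2.83
```

At those utilization figures the consolidation ratio is nearly 3:1, which is where "spend less than half on servers" comes from before you even count opex.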
Thin clients hitting servers (ie, dumb X terms hitting mainframe-type servers) was a similar concept in the sense that you were time-sharing resources, but this makes a similar arrangement possible without operating system dependencies, with application portability (ie, I can move a virtual machine from one cloud at one provider, to another cloud at another provider, about as fast as the bits from the virtual disk can copy over the network - and of course, all the empty space doesn't need to move).
Not really even touching on what private/hybrid cloud means to an enterprise; but suffice it to say, there's a reason why nearly every company in the Fortune 500 has some users pulling out corporate cards and buying compute from AWS; and why they'd like to supply that same experience to their users on a private cloud platform.
Is OpenStack the New Linux?
Funny, we just hired two COBOL programmers at $80K each to maintain some legacy mainframe systems.
This reminds me of a guy I knew in ~1994, who was griping that all his experience was in COBOL, and after getting laid off from making $75k/year, he couldn't find another job. At the time, I was in college, and so I wasn't really familiar with the idea of keeping your skills updated...
When cloud technology can permit hard core data entry, say for insurance records or the like, then I'll worry. But until then, throughput is more important than an app being able to run from wherever in the cloud. Besides, in my line of business. We don't run apps. We run programs that process millions of secure transactions. We have data entry clerks that key documents and data that can't be captured electronically.
You would probably say that we have our own private cloud. I would say that we have our own methods to allow secure access to our internal systems. By the way, I would predict that there will be COBOL programmers still programming even after cloud computing has been replaced with the next marketing hyped phrase.
So I don't know that I would recommend cloud for you; there are reasons to use it, and reasons not to use it. As the technology and ops experience matures, it will be easier to adopt - basically like any tech. But for almost everyone, there are real benefits. Both capex and opex; and some people are using cloud in a way that their capex savings is ~0 (or negative) but their opex savings is huge. (See: Netflix running their entire infrastructure with 3 admins)

Program ~= App. I file my expenses through an Oracle app that runs in a cloud, automatically fetches corporate card transactions from Visa, and lets me roll them into an expense report.
I'm one of the authors of
Securing the Virtual Environment, and my co-author is a QSA, and one of the points of writing the book was to talk about the fact that cloud *can* be secure and can be compliant. (Although in the case of a public cloud, obviously compliance requires underlying compliance by your provider, as well as your own processes) Of course, there are a bunch of risks, too - but there are, for example, cloud services that have passed HIPAA and FISMA audits.
In short, cloud is more than just a buzzword; it's an evolution in the technology that powers IT. I'd say it's more evolution than revolution, but it is more than a buzzword.
Is OpenStack the New Linux?
There's a term called "cloudwashing" that covers inappropriate use of the word cloud, but cloud technology is real, and every company in tech is pouring money into this transition.
Anyone who has worked in IT in large enterprise has seen the benefits of virtualization in action; there's an enormous amount of capex and opex savings, and VMware basically dominates the market. There's a reason 99%+ of the Fortune 500 have an ELA with them.
The same principles behind that revolution are now reaching into the public space, and looking to blend the private IT compute farms with public cloud resources as well; plus more apps being deployed as SaaS, and more apps being developed on PaaS stacks; all the technology of big data (eg, Mongo), messaging (eg RabbitMQ), and so on just form a virtuous circle with this trend. Apps become more able to run in generic clouds without requiring very specific hardware control, and thus IaaS clouds become more attractive.
If you're in system, network, storage, or security administration, or IT of any sort, and you're not learning about this, you're basically a COBOL programmer waiting to be put out to pasture.
Is OpenStack the New Linux?
It's meant to be syllogistic.
At this point, though, OpenStack is still pre-1.0, perhaps equivalent to Linux circa 1993. Whether it can polish up and continue to deliver what is needed is yet to be seen.
The impetus behind cloud right now means that this will be a lot more high profile than Linux was in 1993. There's all sorts of politics (eg
Why Citrix Left Openstack) at play, and no one has an OpenStack cloud of any significant size running. OpenStack has been tooting its horn for 18+ months and yet the most advanced player is really just going into production. Rackspace clearly sees OpenStack as an avenue to leverage outside development in an effort to go after Amazon, but whether that makes it viable for other people - and thus creates a rewarding ecosystem - has yet to be seen.
Is OpenStack the New Linux?
OpenStack isn't a distro. It's a collection of utilities for virtualizing and managing compute and storage resources to build clouds. Putting Apache, PHP, and MySQL onto a Linux box doesn't make the LAMP stack "Linux" any more than putting OpenStack services (Nova, Swift, etc) onto a Linux distro makes OpenStack Linux.
Ask Slashdot: Tips For Designing a Modern Web Application?
I have two suggestions that are close to staying with Java:
(1) Check out Spring (http://www.springsource.org/); Spring has a bunch of goodies that make developing web apps easier, and the guys from spring (Adrian Colyer, Richard MacDougall) are thinking really hard about scalable web services. This is a foundation that will let you write in Java but still be prepared for the future.
(2) Even better, don't go with Java, but leverage some of what you learned and pick up Scala. See
http://www.scala-lang.org/, or pick up Martin Odersky's book. Think of Scala as what Java would be if someone who appreciated terse, expressive syntax and good conventions redesigned Java. Odersky wrote the reference Java compiler while at Sun, and Scala compiles to Java bytecode and can directly use Java libraries. (In my first Scala project, for example, I used UnboundID's LDAP libs directly in my Scala code.) Odersky, along with some other luminaries (Viktor Klang, Paul Phillips, etc), has formed Typesafe, and they are producing Scala the language + Akka (an actor framework) + Play (a web framework). Outside of Play, many people are huge fans of Lift, and it does have some magic that no other framework has.
Remember how you said "modern" web application? Well, Scala supports functional programming, and you can mix functional and imperative code in the same application, which means you can support massively scalable sites by writing clean, idempotent code where needed.
If none of this sounds appealing, then I'd recommend Django+Python, as it is, imo, the best way for a relative web novice to produce decent code, and the amount you can do with a few hours reading docs and then digging in is shocking.
Ask Slashdot: What Type of Asset Would You Not Virtualize?
This is generally incorrect advice, at least for a VMware environment. Best practice is to virtualize vCenter Server and its database, and use them with HA/DRS. Because of the way vCenter interacts with ESXi (the hypervisor it administers), ESXi is "preconfigured" with HA/DRS rules; if the server running vCenter goes down, a different hypervisor will actually bring the management VM back up. (In other words, the vMotion and HA stuff, while CONFIGURED by vCenter Server, does not need the vCenter Server online to actually carry out an HA restore.)