
Building an IT Infrastructure Today vs. 10 Years Ago

Soulskill posted about 10 months ago | from the more-clouds-baby dept.


rjupstate sends an article comparing how an IT infrastructure would be built today with one built a decade ago. "Easily the biggest (and most expensive) task was connecting all the facilities together. Most of the residential facilities had just a couple of PCs in the staff office and one PC for clients to use. Larger programs that shared office space also shared network resources and server space. There was, however, no connectivity between each site -- something my team resolved with a mix of solutions including site-to-site VPN. This made centralizing all other resources possible and it was the foundation for every other project that we took on. While you could argue this is still a core need today, there's also a compelling argument that it isn't. The residential facilities had very modest computing needs -- entering case notes, maintaining log books, documenting medication adherence, and reviewing or updating treatment plans. It's easy to contemplate these tasks being accomplished completely from a smartphone or tablet rather than a desktop PC." How has your approach (or your IT department's approach) changed in the past ten years?
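
To make the submitter's site-to-site VPN point concrete: once each facility's subnet is routed over the tunnels, the centralized resources can be smoke-tested from the hub in a few lines. A minimal sketch in Python, with hypothetical per-site addresses and the SMB port standing in for "the centralized resource" (the article gives no actual addressing):

    import socket

    # Hypothetical file-server addresses at each facility, reachable only
    # once the site-to-site VPN routes are up.
    SITES = {
        "residential-1": "10.10.1.10",
        "residential-2": "10.10.2.10",
        "main-office":   "10.10.0.10",
    }

    def reachable(addr, port=445, timeout=3.0):
        """Return True if a TCP connection to (addr, port) succeeds."""
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, addr in SITES.items():
        print(f"{name:14s} {'up' if reachable(addr) else 'DOWN'}")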


Happy Friday from The Golden Girls! (-1)

Anonymous Coward | about 10 months ago | (#45492071)

Thank you for being a friend
Traveled down the road and back again
Your heart is true, you're a pal and a cosmonaut.

And if you threw a party
Invited everyone you knew
You would see the biggest gift would be from me
And the card attached would say, thank you for being a friend.

Re: Happy Friday from The Golden Girls! (-1)

Anonymous Coward | about 10 months ago | (#45492151)

Lawl cosmonaut

You don't build it (4, Funny)

dyingtolive (1393037) | about 10 months ago | (#45492075)

You just put it all in the cloud brah. My boss assured me it'd be okay and he got his MBA from

Re:You don't build it (4, Insightful)

Archangel Michael (180766) | about 10 months ago | (#45492267)

The cloud is fine and dandy until Microsoft Azure is unreachable for several hours ... again ...

http://www.theregister.co.uk/2013/11/21/azure_blips_offline_again/ [theregister.co.uk]

Re:You don't build it (-1, Offtopic)

Anonymous Coward | about 10 months ago | (#45492367)

Wow! Something just shot out of my asshole! What was it!? Well, I can tell you what it wasn't: Al Capone's submarine.

Re:You don't build it (0)

Anonymous Coward | about 10 months ago | (#45493617)

OMFG, 4channers have mod points. Someone please mod that child back down. Thank you. Guess I'll metamod again tonight and hope I find that comment... some people should never have mod points.

Re:You don't build it (1)

Anonymous Coward | about 10 months ago | (#45492385)

Doesn't matter. The only important consideration is that no one gets fired for going with the Microsoft solution.

Re:You don't build it (-1, Troll)

Anonymous Coward | about 10 months ago | (#45492733)

I'm not sure I get the relationship between Microsoft Azure and the cloud, except that "cloud" is a badly defined term.

The most fundamental change in the last 10 years is that nobody building serious new systems uses Windows in them. Yes, I know that lots of people are spending plenty of money on Microsoft; there is a huge mixture of legacy systems and incompetents. Sometimes it's difficult to tell which is which (it looks like a legacy system but is actually there because an incompetent is unable to migrate off MSSQL to MariaDB and tells you something about his "special needs integrity application").

It is not an accident that all the most successful big companies recently (Google, Facebook, Amazon, etc.) went with Microsoft-free architectures. They all came from areas where there were plenty of competitors, but the total freedom to restructure the whole system is what the real "cloud" is about.

Re:You don't build it (0)

Anonymous Coward | about 10 months ago | (#45493393)

> It is not an accident that all the most successful big companies recently (Google, Facebook, Amazon, etc.) went with Microsoft-free architectures. They all came from areas where there were plenty of competitors, but the total freedom to restructure the whole system is what the real "cloud" is about.

... and these are all internet companies, which never traditionally used Microsoft to start with (e.g., Yahoo was on FreeBSD from the beginning).

going back to:

> The most fundamental change in the last 10 years is that nobody building serious new systems uses Windows in them.

there is a huge difference between a developer-centric software/internet company and, let's say, an insurance brokerage or similar.

Re:You don't build it (1)

dyingtolive (1393037) | about 10 months ago | (#45493477)

Yeah, I'm with you. I was going for funny. I think I did it wrong. :(

Re:You don't build it (1)

mysidia (191772) | about 10 months ago | (#45497549)

The cloud is fine and dandy until Microsoft Azure is unreachable for several hours ... again ...

The cloud does not mean Azure.

The cloud means something like Rackspace, Softlayer, Slicehost, Linode, BuyVM, DigitalOcean.

There are plenty of other hosting providers that don't have a 4 hour outage every 3 months.

Re:You don't build it (-1)

Anonymous Coward | about 10 months ago | (#45492299)

10 years ago it would have been the eCloud Service... and there'd be that foosball table with the bent rods from the bubble days that no one plays with, and only 2 or 3 Starbucks in a 1-block radius.

Why, 10 years ago you'd get laughed out of town for selling a crappy hamburger at $25, but you'd really be interested in the special features on the latest DVD.

Re:You don't build it (1)

sleekware (1109351) | about 10 months ago | (#45492699)

You can steer these types by convincing them that it may be beneficial to do the up-and-coming thing and have their "own" cloud -- a private cloud, known to us as a classic onsite data center -- if appropriate to your I.T. situation.

Re:You don't build it (1)

ObsessiveMathsFreak (773371) | about 10 months ago | (#45496749)

Don't forget to buy new iPads for all your employees as well so that they can get more work done!

Re:You don't build it (1)

antdude (79039) | about 10 months ago | (#45504601)

From what/where/whom?

expect to allow intrusive oversight (2, Insightful)

Anonymous Coward | about 10 months ago | (#45492141)

not much else has changed

Re:expect to allow intrusive oversight (0)

Anonymous Coward | about 10 months ago | (#45492249)

Just get a couple of these [youtube.com] on your system and they'll take care of everything.

There are limitations (1)

cold fjord (826450) | about 10 months ago | (#45492171)

Most enterprises rely upon one or more software packages from a vendor, often for critical functions. You can only do what your vendor's software allows. Not everything is tablet friendly or cloud happy.

Re:There are limitations (1)

neo-mkrey (948389) | about 10 months ago | (#45492889)

or is it tablet happy and cloud friendly?

HIPAA Privacy Rules (1)

PPH (736903) | about 10 months ago | (#45492257)

I believe these came into effect about 10 years ago. So aside from all the advances in "the cloud," I'd ask whether that would be secure enough -- and don't just ask a bunch of Slashdotters. Ask the potential cloud providers if they are HIPAA compliant and can provide documentation to that effect.

Use GMail for transferring medical records and I guarantee you'll be swamped with ads for everything from Vi@gr@ to funeral services.

Re:HIPAA Privacy Rules (2)

Joehonkie (665142) | about 10 months ago | (#45492421)

If only there were some way to look this up:

  • http://aws.amazon.com/compliance/#hipaa
  • https://support.google.com/a/answer/3407054?hl=en

Re:HIPAA Privacy Rules (0)

Anonymous Coward | about 10 months ago | (#45492461)

HIPAA [wikipedia.org] was signed into law on August 21, 1996.

Re:HIPAA Privacy Rules (2)

PlusFiveTroll (754249) | about 10 months ago | (#45493201)

The effective compliance date of the Privacy Rule was April 14, 2003, with a one-year extension for certain "small plans."

Or pretty much 10 years ago.

Re:HIPAA Privacy Rules (1)

mikeroySoft (1659329) | about 10 months ago | (#45493679)

VMware's new cloud is signing BAAs (Business Associate Agreements) to ensure HIPAA regulation compliance with its customers.
press release [vmware.com]

How HIPAA works [hhs.gov]

Re:HIPAA Privacy Rules (0)

Anonymous Coward | about 10 months ago | (#45497023)

HIPAA compliance has insane requirements... an armed guard 24-7, for example. I doubt the "fly by night" companies are actually compliant. I have had a look at some medical record systems: server left logged in, database management tools installed, able to browse the database in clear text (except the MD5 "hashed" passwords), unencrypted backups (on site and off site).

I don't understand how there is no conflict of interest when the people writing the rules are the exact same ones that benefit from the fines when someone is found to be non-compliant. There are also items within the standard that contradict each other, which means it is just a matter of who they are pointing the finger at to collect from today. No matter what, you ARE in violation of HIPAA.

Re:HIPAA Privacy Rules (1)

CohibaVancouver (864662) | about 10 months ago | (#45498185)

HIPAA compliance has insane requirements... an armed guard 24-7

HIPAA makes no mention of 'armed guards.'

In a nutshell, HIPAA states:

1) You must protect health data, whether it is digital or in a filing cabinet.

2) What the penalties are for a breach of that data.

Typing will always be better on a PC (2)

tepples (727027) | about 10 months ago | (#45492297)

The residential facilities had very modest computing needs -- entering case notes, maintaining log books, documenting medication adherence, and reviewing or updating treatment plans. It's easy to contemplate these tasks being accomplished completely from a smartphone or tablet rather than a desktop PC.

And by the time you've paired an external keyboard in order to key in all that stuff, you might as well just use a laptop PC.

In addition, some cloud solutions make dedicated desktop application suites or specific configurations unnecessary today. Browser-based options or virtual desktops have added appeal in health organizations because data is less likely to be stored locally on a device.

That'd double an organization's spending on operating system licenses because a Terminal Server CAL for Windows Server costs about as much as a retail copy of Windows for the client.

Re:Typing will always be better on a PC (1, Insightful)

UnknowingFool (672806) | about 10 months ago | (#45492431)

And by the time you've paired an external keyboard in order to key in all that stuff, you might as well just use a laptop PC.

And when all you have is a hammer, everything looks like a nail. Seriously, I don't use a tablet for my work functions, but I use a smartphone to get my emails on the road. But I am not everybody; I have different needs. Sometimes a laptop isn't the answer for everyone.

That'd double an organization's spending on operating system licenses because a Terminal Server CAL for Windows Server costs about as much as a retail copy of Windows for the client

First of all, who says the organization requires Terminal Server to use a cloud-based system? Also, "browser-based" means the solution can be OS agnostic -- Salesforce, for example. In fact, some people might have these things called Macs or maybe *gasp* a Linux machine. Lastly, are you aware that companies can negotiate with MS on enterprise licensing? Not every company pays full retail price for everything.

Re:Typing will always be better on a PC (0)

Anonymous Coward | about 10 months ago | (#45495851)

And by the time you've paired an external keyboard in order to key in all that stuff, you might as well just use a laptop PC.

Yeah, pairing a Bluetooth keyboard with an iPad or a Nexus takes FOREVER. It's not like the connection is on in moments from a cold start, and remembered until/unless you break the pairing. You might as well break out that laptop, wait 2-3 minutes for it to boot up into a usable state, and keep on doing things that way.

In addition, some cloud solutions make dedicated desktop application suites or specific configurations unnecessary today. Browser-based options or virtual desktops have added appeal in health organizations because data is less likely to be stored locally on a device.

Oh yeah, because the only solution for VDI is windows.

Four seconds and my laptop is out of sleep (1)

tepples (727027) | about 10 months ago | (#45496029)

Yeah, pairing a Bluetooth keyboard with an iPad or a Nexus takes FOREVER. It's not like the connection is on in moments from a cold start, and remembered until/unless you break the pairing.

Sarcasm detected. But the fact is, when I have tried using a keyboard with one tablet and then another tablet, it broke the pairing. And if you use a ZAGGkeys Flex or any of several other brands of Bluetooth keyboard with unrooted Android 4.3, it'll pair but you won't be able to type because certain Broadcom chipsets are misrecognized as "nonalphabetic keyboards", that is, gamepads.

You might as well break out that laptop, wait 2-3 minutes for it to boot up into a usable state

When I open my laptop's lid, it takes all of four seconds to come out of sleep and get the unlock prompt up. Dell Inspiron mini 1012 running Xubuntu 12.04 LTS.

Oh yeah, because the only solution for VDI is windows.

It is if the applications on which your company relies are Windows applications known to fail in Wine.

Android 4.4 fixes the keyboard bug (1)

tepples (727027) | about 10 months ago | (#45522239)

And if you use a ZAGGkeys Flex or any of several other brands of Bluetooth keyboard with unrooted Android 4.3, it'll pair but you won't be able to type

Android 4.4 fixes this. I tried it on my own Nexus 7.

Not much difference (1)

DogDude (805747) | about 10 months ago | (#45492315)

Not much difference, really. We're using the same OS. We're using the same hardware, usually, and whatever we need to purchase is absurdly cheap (cheaper than it was 10 years ago). We rely on the Internet as much as we did then: it's important, but not mission-critical (because it's unreliable). Our industry-specific applications still suck. Networking is identical, but a bit faster.

Re:Not much difference (0)

Anonymous Coward | about 10 months ago | (#45492587)

how can you say that there's not much difference, Linux now has Flash for watching videos online!!!!!!!

Re:Not much difference (5, Informative)

mlts (1038732) | about 10 months ago | (#45492773)

In 2003, Sarbanes-Oxley was passed, forcing companies to have to buy SANs just to stick E-mail for long term storage/archiving.

For the most part, things have been fairly static, except with new buzzwords and somewhat new concepts. A few things that have changed:

1: Converged SAN fabric. Rather than have a FC switch and a network switch, people are moving to FCoE or just going back to tried and true iSCSI which doesn't require one to fuss around with zoning and such.

2: Deduplication. We had VMs in '03, but now whole infrastructures run on them, so having disk images on a partition where only one base image is stored, with just diffs stored for the other machines, saves a lot of space.

3: RAID 6 becomes necessary. Disk I/O hasn't grown as fast as capacity, so the time it takes to rebuild a blown disk is pretty big. RAID 6 becomes a must so a degraded volume can survive a second failure while it rebuilds (see the back-of-envelope sketch after this list).

4: People stop using tape and go with replication and more piles of hard disks for archiving. Loosely coupled SAN storage in a hot recovery center becomes a common practice to ensure SAN data is backed up... or at least accessible.

5: VMs use SAN snapshots for virus scanning. A rootkit can hide in memory, but any footprints on the disk will be found by the SAN controller running AV software and can be automatically rolled back.

6: We went from E-mailed Trojans, macro viruses, and attacks on firewalls and unprotected machines to having the Web browser being the main point of attack for malware intrusion. It has been stated on /. that ad servers have become instrumental in widespread infections.

7: The average desktop computer finally has separate user/admin access contexts. Before Vista, this was one and the same in Windows, allowing something to pwn a box quite easily.

8: The OS now has additional safeguards in place, be it SELinux, Window's Low security tokens, or otherwise. This way, something taking over a Web browser may not be able to seize a user's access context as easily.

9: BYOD has become an issue. Ten years ago, people fawned over RAZR-type devices and an IT person had a Bat Belt of devices: the digital camera, the MP3 player, the PDA, the pager, the cellphone, and the Blackberry for messaging. Around '05, Windows Mobile merged all of this into one device, and '07 brought us the iPhone, which made the masses desire one device, not a belt full.

10: Tablets went from embedded devices to sitting on desktops and being big media-consumption items.

11: Music piracy was rampant, so one threat was people adding unexpected "functionality" to DMZ servers by having them run P2P functionality (AudioGalaxy, eMule, etc.)

12: We did not have to have a Windows activation infrastructure and fabric in place, where machines had to have some internal access to a KMS box to keep running. XP and Windows Server 2003 had volume editions which, once handed a key, would update and were happy for good.

13: UNIX sendmail was often used for mail before virtually everyone switched over wholesale to Exchange.

14: Hard disk encryption was fairly rare. You had to find a utility like SafeBoot or use loopback-encrypted partitions on the Linux side for data protection. This was just after the NGSCB/Palladium fiasco, so TPM chips were not yet mainstream.

15: One still bought discrete hardware for hosts, because VMs were around for devs but hadn't really "earned their bones" in production. So, you would see plenty of 2-3U rack servers with SCSI drives in them for drive arrays.
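
A back-of-envelope sketch of point 3, using assumed figures (not sourced from the comment) for disk size and rebuild rate, shows why single-parity rebuild windows became scary enough to force RAID 6:

    # Assumed: a 2 TB nearline disk rebuilding at a sustained 50 MB/s
    # while the array keeps serving production I/O.
    capacity_bytes = 2 * 10**12
    rebuild_rate = 50 * 10**6      # bytes per second

    hours_degraded = capacity_bytes / rebuild_rate / 3600
    print(f"~{hours_degraded:.1f} hours degraded")   # ~11.1 hours

    # RAID 5 cannot survive a second failure during those hours;
    # RAID 6's second parity stripe covers exactly that window.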

Things that have stayed the same, ironically enough:

1: Bandwidth on the WAN. The big changes came and went after initial offerings of cable and DSL. After that, bandwidth costs pretty much have not changed, except for more fees added.

2: Physical security. Other than the HID card and maybe the guard at the desk, data center physical security has not changed much. Some places might offer a fingerprint or iris scanner, but nothing new there that wasn't around in 2003. The only major difference in physical security is Assa Abloy's CLIQ system, which allows for both a mechanical pick-resistant key and an electronic access mechanism, making it easy to remove a key's access from locks.

3: Lack of interest in computer security. After all the breaches in the past decade, and the fact that companies suffer little to nothing after six months to a year, security still is a low priority in a lot of companies, since it is viewed as having no ROI. Was true in '03, still is true today.

Re:Not much difference (0)

Anonymous Coward | about 10 months ago | (#45493563)

Damn, this is surely the most informative comment that I've read on slashdot in the past year.

I'd give you reddit gold if I could. Good job!

Re:Not much difference (2)

CAIMLAS (41445) | about 10 months ago | (#45493805)

Another big difference which relates to the list you mentioned: almost nobody runs their own in-house mail anymore. It's too expensive (in time and experience, mostly) to maintain efficiently and effectively, in no small part due to spam. Even larger organizations have decided it's not worth the headache.

If there is in-house hosting of mail, it's due to complex requirements and the headache that migration would be to another system. Many of these have also put in place either Google or Microsoft frontend filtering to their mail systems.

Re:Not much difference (2)

hairyfish (1653411) | about 10 months ago | (#45495319)

almost nobody runs their own in-house mail anymore.

My experience is different from yours. I work for an IT service consultancy and we're trying to push a lot of customers to cloud-based email, but they're all sticking to their guns. No one around here likes the cloud for key business functions, and the NSA press is keeping them firmly entrenched in their views. For most companies (fewer than 1000 users) Exchange is trivial to set up and maintain, and can be supported part-time or by outsourced support. Over 1000 users, you have a big enough IT team to look after it properly.

Re:Not much difference (1)

mlts (1038732) | about 10 months ago | (#45495503)

My experience mirrors yours.

Even the PHBs want to keep company E-mail in-house for fear that a provider could later use their communications -- stored for 7 years due to SOX rules -- against them.

Some places I've seen tend to have their top brass on an in-house Exchange system, while lower levels might end up on Azure or a cloud provider.

Exchange is pretty easy to get up and running, especially if AD is in place. It has decent anti-spam filters that you can turn on out of the box for the edge server, and it works fairly decently. The only real third-party program needed would be something to ensure mailboxes are backed up regularly and possibly archived offline due to regulations and e-discovery laws.

Re:Not much difference (1)

Nivag064 (904744) | about 10 months ago | (#45497875)

One minor problem, Exchange requires Microsoft...

Re:Not much difference (1)

hairyfish (1653411) | about 10 months ago | (#45522199)

That problem is religious, not technical.

Mail is mostly inhouse in big US orgs (0)

Anonymous Coward | about 10 months ago | (#45496075)

almost nobody runs their own in-house mail anymore. It's too expensive (in time and experience, mostly) to maintain efficiently and effectively, in no small part due to spam. Even larger organizations have decided it's not worth the headache.

Wow, never heard that before. In the USA everybody just drops in an Ironport or two and a couple Exchange servers and calls it a day. Exchange is crap, but the ironport makes it viable and end-users have been psychologically conditioned to want Outlook (also crap) so they pay big $$ for poor performing software and back it up with dead cheap awesome hardware, and end up with something any typical 13-year-old can admin after a three-week training course.

Re:Not much difference (2)

cusco (717999) | about 10 months ago | (#45494901)

I work in physical security, so will mention some changes that your site may not have implemented but which many larger sites have.

1) Granularity of access - Formerly, if you had an access card it got you into the data center, and from there you had free rein. Today the data center is (or should be) compartmentalized, with access to each area dependent on need.

2) Rack Access - There are now several brands of hardware that control technicians' access to individual racks, including front and/or rear rack door.

3) Video Monitoring - Data centers are now full of cameras, often linked to readers or door contacts on individual racks (especially Global Payment System racks).

4) Facility Monitoring - Temperature, power status, UPS state, water sensors, smoke detectors, etc. all come into the alarm system, where they are monitored by guard staff.

5) Computing Pods - Access to container-based computing centers has not only changed power and cooling management but access control as well.

6) Key Tracking - Systems like Traka and KeyWatcher can be integrated into the access control system so that hard keys for individual racks/rooms/pods/equipment can be checked in and out under strict controls.

7) Procedures - Data center staff now have (or should have) documented procedures to follow to grant/allow access, generate/revoke cards, tracking and automated expiration for temporary access cards, etc.

Re:Not much difference (0)

Anonymous Coward | about 10 months ago | (#45495779)

Granularity for sure. The last data center I visited was: 5 badge swipes + x-ray scan of belongings + metal detector + fingerprint scan just to get into the lobby, then another 2 badge swipes + 2 fingerprint scans to get into the server rooms. If you tailgate through a single doorway, you get locked in that area and have to phone security, then be escorted back to the door you didn't badge through so you can do it properly. Everything is automatically logged and reviewed by the security response team, with follow-up by management if you have any failures to badge correctly.

Once inside the server rooms there wasn't much else; all the racks were unlocked, which at least was a step up from my last company, which took the doors off completely. Which itself was a step up from teams running their own servers tucked under "Bill's" desk.

Same now as it was back then . . . (4, Funny)

mmell (832646) | about 10 months ago | (#45492373)

First, consult the stars to ensure that the project will be done at the right time. Then, after arranging the entrails of a rooster in a circle under the full moon, cast the bones into the pit and invoke the augury which will allow me to see the hardware, software stack, network stack and end-user facilities all magically "come together".

Really -- I'm pretty sure my boss in the Midwest thought that was how I did it. Why would I change success?

Re:Same now as it was back then . . . (4, Insightful)

Anonymous Coward | about 10 months ago | (#45492537)

That's good, but reality is more like...

Determine the deadline, if at all possible, don't consult anyone with experience building infrastructure.

Force committal to the deadline, preferably with hints of performance review impact.

Ensure purchasing compliance via your internal systems, which minimally take up 30% to 40% of the remaining deadline.

Leave the equipment locked in a storage room for a week, just to make sure. Or, have an overworked department be responsible for "moving" it, that's about a week anyway.

Put enormous amounts of pressure on the workers once the equipment arrives. Get your money's worth; make them sweat.

When it's obvious they can't achieve a working solution in 30% of the allotted time (due to other blockers), slip the schedule a month, three days before the due date, because it isn't really needed until six months from now.

That's how it is done today. No wonder people want to rush to the cloud.

Re:Same now as it was back then . . . (1)

riis138 (3020505) | about 10 months ago | (#45493093)

I thought this was industry-standard best practice.

Nothing's changed (0)

Anonymous Coward | about 10 months ago | (#45492401)

Management thinks they can save money now, and don't believe us when we tell them how much money they'll save investing in $time-saving-technology. We still use multiplexed T1 lines for Internet access despite, say, fiber and coax having been available for decades now.

The only thing you can say that's improved over the years is an insistence on provisioning remote access capabilities. Even if you can't work from home.

Actually, 10 years ago (2)

mjwalshe (1680392) | about 10 months ago | (#45492411)

You'd be doing what we do now, except that some types of networks now use a leaf-and-spine rather than a tree design.

Well.... Quite a bit has happened. (3, Interesting)

Chrisje (471362) | about 10 months ago | (#45492459)

We've consolidated all office application servers into 5 data centers, one per continent. Then we rolled out end-point backup, including legal hold capabilities, for some 80,000 laptops in the field and some 150,000 more PCs in offices across the world. Each country in which we're active has a number of mobile device options for telephony, most of them Android and Win8 based nowadays since WebOS got killed.

Then we're in the process of building a European infrastructure where we have data centers for managed customer environments in every major market in Europe. I am currently not aware of what's going on in APJ or South America. This is important in Europe, however, because managed European customers don't want to see their data end up in the States, and the same goes for those that use our cloud offerings.

Physical local IT staff presence in all countries has been minimized to a skeleton crew, not only because of data center consolidation but also because of the formation of a global IT helpdesk in low-cost countries and the rise of self-service portals.

The plethora of databases we had internally has been archived using Application Information Optimizer for structured-data archiving. We are our own biggest reference customer in this regard. On top of that, we've beefed up our VPN access portals across the world to accommodate road warriors logging in from diverse locations.

Lastly, we use our own Records Management software suite to generate 8,000,000 unique records per day. These are archived for a particular retention period (7 years, I believe) for auditing purposes.

Re:Well.... Quite a bit has happened. (0)

Anonymous Coward | about 10 months ago | (#45492589)

Not sure HP is comfortable with you sharing all this information publicly.

Re:Well.... Quite a bit has happened. (0)

Anonymous Coward | about 10 months ago | (#45494479)

I am 87.3% positive this is General Electric, not HP.

Re:Well.... Quite a bit has happened. (1)

Chrisje (471362) | about 10 months ago | (#45494751)

Sorry to burst your bubble there.

I'm afraid to say I've been bleeding blue (HP Blue, not Dell, IBM, eh... never mind) since the dawn of time.

Re:Well.... Quite a bit has happened. (2)

Chrisje (471362) | about 10 months ago | (#45494707)

Yes they are. I work in the Information Management software division as a pre-sales, and I'm pretty much paid to tell subsets of the above to customers.

- We are our own reference customer for Connected backup for end-points.
- We are our own reference customer for TRIM, now known as HP Records Manager 8.0
- We are our own reference customer for Database Archiving, now known as HP Application Information Optimiser

So all of that is publicly available in white-papers and case-studies.

The fact that we're building a public cloud infrastructure per country in Europe is also very much not a secret. If we want to get or retain EU-based cloud customers, we need to be able to guarantee that their data remains their data and that it won't fall prey to third parties, chief among them the US government.

In terms of data center consolidation and cost savings associated with that, the strategy internal IT is following is largely in line with the Data Center concept we sell as Converged Infrastructure, Cloud System Matrix and Cloud System One.

Moreover our external web presence is run on the newly launched project Moonshot, in which you can currently cram some 45 servers in 5U rack space, which will soon get uplifted to 180 servers in 5U rack space.

All of this is a clear-cut case of eating your own cooking, and then using that fact to market the underlying technologies.

So yes, I am very much convinced HP is comfortable with me sharing this publicly.

Re:Well.... Quite a bit has happened. (2)

cusco (717999) | about 10 months ago | (#45493127)

In the field of physical security, I've seen customers with 10 independent access control systems scattered around their various facilities condense into a single centralized and monitored system. Access control system panels used to be connected serially to a "server" which was a cast-off desktop PC shoved under a janitor's desk, but now are actual servers in server rooms, monitored and backed up by IT staff, communicating with panels that might be on the other side of the planet.

Security video was analog cameras connected with RG-59 coax cable and plugged directly into a DVR or (gods help us) a VCR. The only way to view live or recorded video was to go to the site where the cameras were physically located, and with many systems the act of viewing recorded video would stop the system from recording until you were done. Recording capacity was measured in days or sometimes hours, and casinos had people whose jobs were just to walk from one VCR to another changing tapes all day. Failures to record were more common than actually capturing an incident. Today's IP cameras record to NVRs that have terabytes of capacity in RAID 10 arrays, often with redundant servers, sometimes recording across the WAN or across the Internet to data centers in other countries.

Integration between card readers, alarm points, monitoring points and cameras is common today. A motion detector set off in a data closet in Mumbai may raise an alarm and pop up video on a guard's computer in Dublin while sending an email with a video clip to smartphones in Los Angeles and Sydney. The guard may dispatch local staff in Mumbai on his handheld phone across the IP telephony system, and they may reply on their walkie-talkies. Access to the site might be granted by staff in the SOC in Phoenix, and repair crews may be dispatched by the Facilities department in Houston.

In the Physical Security industry the future is now, and it's exciting.

You can't make this shit up (-1)

Anonymous Coward | about 10 months ago | (#45492479)

Really, this is weapons grade idiocy, truly epic.

http://globalgrind.com/2013/11/21/all-the-fake-democrats-please-take-a-seat-by-russell-simmons-blog/
millions of lives
"Yes, we initially wanted single payer, and we had to compromise back in 2009 for the Affordable Care Act. But, it is a damn good piece of legislation that has already saved hundreds of thousands, if not millions of lives." Didja catch that peeps? "millions of lives" indeed.

First of all, I truly don't know who this idiot is, I assume he is some brain damaged rap artist or something like that.

"hundreds of thousands, if not millions of lives"

This is what Democrats really think. Facts have no relevance or meaning in their lives. They *want* millions to be saved by Obama, and that's all that is really needed; the *fact* that they want it to happen is in itself an end. They want to do good, they say they are doing good; good, therefore, must be happening.

In many ways it is the public school system, teachers unions, and the media that are responsible for this. This nonsense cannot happen if we have an intelligent, capable, and moral citizenry; we clearly do not.

Oh, and it gets better:

"I know where Republicans stand. They have voted 47 times to repeal the Affordable Care Act, so their stance is clear. If you get sick, you’re on your own. If you can’t pay for your medical expenses, declare bankruptcy. If you have a pre-existing condition, they’ll send you a get well card when you’re on your death-bed."

All of these things are lies. Lies that cannot be substantiated in any way - but that doesn't matter, because no one will call him on this dishonesty - no one will hold this man, or any of them, to account. Republicans are the go-to bad guys: the Reapers of Firefly, the Indians of your cowboys-n-indians movies, the Jews of Nazi Germany, the Kulaks of the USSR. You have to have a class of bad guys to blame and distract the masses with so they do not focus their attention on the truly evil ones, the directors of this madness: the Democrats.

If you haven't read We The Living by Ayn Rand I strongly suggest it - it's a short read and worth your time. Gwan people, think outside the box for once, don't take your dogma to be gospel, challenge the authorities for a change! Think for yourself. Note I am not suggesting you blindly follow my thinking, I am honestly telling you to do your own homework, make your own decisions, challenge your own assumptions! "Question Authority" used to be the anti-establishment meme of the truly hip and cool crowd was it not? What happened to you people?

Because I can tell you for a fucking fact, Obamacare hasn't saved no millions of lives, you can be damned sure of that. So think, people: use the brains god gave you and go and find out how many lives the lightbringer has in fact saved. You will be surprised.

Re:You can't make this shit up (0)

Anonymous Coward | about 10 months ago | (#45496063)

the Reapers of Firefly

Nerd card revoked. It's Reavers. ReaVers.

Virtualization (4, Insightful)

Jawnn (445279) | about 10 months ago | (#45492557)

For good or bad (and yes, there's some of both), virtualization is the single biggest change. It is central to our infrastructure. It drives many, if not most, of our other infrastructure design decisions. I could write paragraphs on the importance of integration and interoperability when it comes to (for example) storage or networking, but let it suffice to say that it is a markedly different landscape than that of 2003.

Re:Virtualization (0)

Anonymous Coward | about 10 months ago | (#45492875)

Yes, virtualization is by far the biggest change. The improved h/w utilization is great.

As far as clouds go, I would not touch them with a ten-foot pole for anything sensitive or valuable. Cute pictures of pets and the like, but never valuable business data -- and for what? Not needing to set up a remote VPN?!? Nah, most companies' data is more valuable than that.

Improved network speeds at lower prices are another change.

Better VoIP solutions are available.

Blade servers have come a long way and are in more use, at lower prices and with better utilization.

Re:Virtualization (2)

riis138 (3020505) | about 10 months ago | (#45493159)

I have sat in many meetings focused on cloud storage, and it makes me cringe when I think about sensitive business data flying around. The evolution of BYOD has made encryption programs like PGP and MobileIron a must.

Re:Virtualization (1)

Chrisje (471362) | about 10 months ago | (#45494865)

Information security and adherence to all manner of certifications, both in terms of physical security and compliance with information management regulations, are usually a lot more stringent in a decent (professional) cloud environment than in people's own data centers.

I'd be inclined to disagree with your assessment of hosted infrastructure, although quite honestly I am apprehensive about going to the cloud myself.

Maybe it's a psychological thing.

Re:Virtualization (1)

maxwell demon (590494) | about 10 months ago | (#45512679)

Yes, virtualization is by far the biggest change. The improved h/w utilization is great.

How does adding a virtual machine (and another OS copy) in between the OS and the server program improve hardware utilization (unless you're a hosting company that has to give access to several unrelated entities while protecting them from each other, of course)?

I mean, it certainly improves flexibility. But I don't see how it improves hardware utilization.

Re:Virtualization (1)

Jawnn (445279) | about 10 months ago | (#45539653)

How does adding a virtual machine (and another OS copy) in between the OS and the server program improve hardware utilization?

Ummm.... Because most servers running natively on dedicated hardware are coasting most of the time? You don't really understand virtualization, do you.
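
For what it's worth, the "coasting" argument is just arithmetic. A minimal sketch with assumed utilization numbers (nothing measured):

    # Assumed: 20 standalone servers averaging 8% CPU each, consolidated
    # onto hosts we are willing to run at 65% to leave burst headroom.
    standalone_hosts = 20
    avg_utilization = 0.08
    target_utilization = 0.65

    total_demand = standalone_hosts * avg_utilization
    hosts_needed = max(1, round(total_demand / target_utilization))
    print(f"{standalone_hosts} coasting boxes -> ~{hosts_needed} virtualization hosts")
    # 20 coasting boxes -> ~2 virtualization hosts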

Re:Virtualization (1)

maxwell demon (590494) | about 10 months ago | (#45546805)

What stops you from installing several servers on the same machine without a virtualization layer in between?

Re:Virtualization (2)

riis138 (3020505) | about 10 months ago | (#45493133)

The evolution of package management and group policy has made my job much easier. I don't miss the days of going up and down the rows of desks, popping disks into boxes.

Re:Virtualization (2)

GodfatherofSoul (174979) | about 10 months ago | (#45493261)

Amen to this. I'd say it's the single most important change for network admins in the past 15 years. Our server farm went from a 7-foot stack of pizza boxes with disparate hardware and OSes, which we were paying oodles to have parked in a server farm, to one public VM host in the cloud and one private VM host running on my boss's desktop.

Maroon. (-1)

Anonymous Coward | about 10 months ago | (#45492567)

The writer of the article looks and reads like a character from the Lake Tardicaca episode of Southpark. What a fucking maroon.

Re:Maroon. (1)

fredrated (639554) | about 10 months ago | (#45492663)

What a fucking maroon.

I don't quite understand your post. Do you think the writer is dark brownish-red, or do you think they were abandoned on a desolate island?

Re:Maroon. (1)

cusco (717999) | about 10 months ago | (#45495117)

He thinks the writer is in a Bugs Bunny cartoon.

Re:Maroon. (1)

thsths (31372) | about 10 months ago | (#45493909)

I think the writing is actually ok, but the web site is certainly abysmal.

A cloud is for fools and lazy people. (0)

Anonymous Coward | about 10 months ago | (#45492669)

Smart admins and individuals avoid the use of servers which are under the control of a third party for any data which is in any way sensitive or important.

There are simply too many things which can go wrong, and not all of those things will be accidental.

Well... (1)

Anonymous Coward | about 10 months ago | (#45492869)

Virtualization and Backups: These go hand in hand. Virtualize then back up a server; if the hardware implodes, run it on a toaster oven. This allows people to be more promiscuous with consumer-grade hardware for three-9s applications, and thus enables you to deploy more stuff, provided the software licensing expense is not full-on insane.

PC Miniaturization: Where you used to buy a purpose-built box, you can now buy a PC to do the same thing, e.g. PBX, video conferencing, security cameras, access card systems, etc. Also, people now want to access that gear through mobile devices on short- and long-range wireless radio networks, which, due to the hardware limitations, has given a brand new life to the mainframe computing model.

Stability: Things are a lot more stable now than they were in the 2000s. Remember migrating off of Win98? There's a lot less buggy code sitting around to deal with.

Scaling: Now stable. Setting up 100 PCs with Windows 2000 is nothing like doing Windows 7. Also, you can abuse the hell out of RemoteApp and similar server systems to improve app performance and scalability. Ever try to run Dynamics AX across a WAN link? Yeah.

Monitoring: Has become a lot more granular. A lot more granular...

Security: The game has gone from installing a firewall and laughing at the virus writers to trying to figure out which GPO combinations will stop Cryptolocker from trashing your file shares through an undocumented IE exploit.

Re:Well... (1)

AK Marc (707885) | about 10 months ago | (#45493135)

Remember migrating off of win98?

Nope. IT departments were migrating from NT 3.51 to 2000. Home users were migrating from 98 to whatever you are implying (98SE being next, ME after that, and many waiting for XP). The move from 2000 to XP was easy. XP is what 2000 was supposed to be, so the fundamental differences between 2000 and XP were small; the real difference was that XP worked.

Re:Well... (1)

Anonymous Coward | about 10 months ago | (#45494531)

The only big difference was XP allowed DirectX 9. Windows 2000 always worked. Windows XP IS Windows 2000. You are too young to have fully experienced Windows 2000, Mr. 707885. Oh wait, you experienced it, but because it didn't run your games you poo-pooed it. For getting shit done, Windows 2000 is closer to Windows 7 than Windows XP will ever be.

Re:Well... (3, Insightful)

AK Marc (707885) | about 10 months ago | (#45494917)

2000 managed all sorts of problems with hardware. Drivers lagged, so USB support was crap. Blue screens for plugging in a USB device weren't just saved for press conferences. 2000 was good so long as all you did was Office. The marketing department all went back to Macs, where they had a variety of monitor sizes and commercial editing packages that Just Worked. Ah, making fun of my slashdot number, when you don't even have one. 2000 was "supposed to be" the first converged OS (95/NT), but failed because it wasn't home-user friendly (not just games). XP managed it, and was really an SP of 2000, but with a new OS name, pricing, and marketing.

Re:Well... (1)

cusco (717999) | about 10 months ago | (#45495157)

Hardware is a frack of a lot more stable now too. When was the last time you had a video card or a NIC flake out? In a 900-desktop environment that used to be a daily occurrence.

Re:Well... (1)

Culture20 (968837) | about 10 months ago | (#45496119)

Scaling is now stable. To setup 100 PC's with Windows 2000 is nothing like doing Windows 7.

Setting up 100 PCs with Windows 2000 was extremely easy. Windows 7 has become much harder because you can't edit the default user registry hive without Windows 7 freaking out. Microsoft still needs a good counterpart to /etc/skel/.

Company in the 500-1000 employee count tier (1)

Anonymous Coward | about 10 months ago | (#45493071)

We have divisions world-wide, but our Corporate/HQ division is located in America and consists of roughly 500 employees. At home, we have three facilities at different locations.

- The entire computer system is virtualized through VMware using VDIs with help from VMware View, and hosted at a major [unnamed] datacenter in Texas on a private network set up for our company. We also have an identical setup at an Asian datacenter under the same provider, and both datacenters are linked together through VPN from the core router gateway (not in our control or access)
- The network infrastructure is setup as a Class C 172.x.x.x
- Each facility has a 100mbit direct fiber hookup.
- Each facility has Cisco switches, and a Cisco router that establishes a VPN connection to the regional datacenter (Texas or Asia depending on which continent of the world the division is located in)
- Facilities are equipped with all-in-one zero clients that connect to the VMware View Connection server that runs as a virtual machine at the Texas/Asian datacenter through the LAN via VPN by Cisco router at facility

The virtual switch in VMware vSphere/ESXi, unfortunately, I have no information on, as I was not involved in that. But as far as that goes, I believe it is mostly a matter of what is most efficient in connecting our VDIs to the production servers that host the ERP software. We have a Microsoft SQL server and several servers that support the ERP software we use. The VDIs have an application client locally installed that connects to one of the handful of application servers. Some things to keep in mind are PCI compliance (credit cards, etcetera) and security: keeping unnecessary traffic from reaching the production servers and improving bandwidth capacity between the production servers and the entire client base (VDIs). There are one or two ERP servers that are print servers, but they are not the servers that users connect to in order to add and install printers into their VDI (there is another server just for that).

Then there is also a file server, a couple of load-balanced/clustered Exchange servers, some Barracuda Web Filter & Backup appliances, and a Riverbed optimizer appliance. There are over a dozen hypervisors each maxed out with 256GB of RAM, two Xeon 12-core CPUs, a Teradici PCoIP card, and some network cards.

I hope this is educating, informational, and interesting to readers. :)

Re:Company in the 500-1000 employee count tier (1)

PlusFiveTroll (754249) | about 10 months ago | (#45493319)

>The network infrastructure is setup as a Class C 172.x.x.x

You mean Class B, or specifically the 172.16/12 private network. It may be further subnetted via CIDR, but only having 256 IPs (Class C) doesn't work well in most enterprise settings.

Re:Company in the 500-1000 employee count tier (0)

Anonymous Coward | about 10 months ago | (#45494061)

>The network infrastructure is setup as a Class C 172.x.x.x

You mean Class B, or specifically the 172.16/12 private network. It may be further subnetted via CIDR, but only having 256 IPs (Class C) doesn't work well in most enterprise settings.

Ah sorry, I don't know enough about those things. It is 172.16., but my supervisor says Class C. Our Asian region is 172.18.x.x, while our Western region is 172.16.x.x. The third octet is segmented into a handful of categories, each category with 512 IPs. One category is for servers/switches/related other, one for wireless (each facility has Cisco WAPs + a Cisco wireless controller) DHCP clients, one for wired DHCP clients, and one for printers.
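
For the record, both regions sit inside the RFC 1918 block 172.16.0.0/12 (historically Class B territory, as the parent says, not Class C), and a 512-address "category" is a /23. A quick sketch with Python's standard ipaddress module, using the regions named above:

    import ipaddress

    # 172.16.x.x and 172.18.x.x both fall inside the private /12 block.
    private_block = ipaddress.ip_network("172.16.0.0/12")
    for region in ("172.16.0.0/16", "172.18.0.0/16"):
        assert ipaddress.ip_network(region).subnet_of(private_block)

    # Carving the Western region into 512-IP categories (/23 subnets):
    western = ipaddress.ip_network("172.16.0.0/16")
    for block in list(western.subnets(new_prefix=23))[:3]:
        print(block, "->", block.num_addresses, "addresses")
    # 172.16.0.0/23 -> 512 addresses
    # 172.16.2.0/23 -> 512 addresses
    # 172.16.4.0/23 -> 512 addresses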

Expectations (0)

Anonymous Coward | about 10 months ago | (#45493139)

Try to control expectations. Do not let the deadline date become the goal. When setting the roll-out/cut-over date, give yourself a couple of extra weeks or months if at all possible. If you're ready to roll early, great. Remember: users and executives remember failures, not the thousands of things that went right.

Mid 90s (0)

Anonymous Coward | about 10 months ago | (#45493155)

In the mid 90s I was deploying Novell NetWare SFT III servers: paired servers with fiber backbone cards and shared storage, as A+P "clusters" (we didn't use that word then). In the event the active node went down, the other one would take over almost instantly (a 5 ms delay). Now I do application security but still work in very large data centers, and no one is doing that level of availability.

Re:Mid 90s (0)

Anonymous Coward | about 10 months ago | (#45494597)

...and Exchange can't even do the things GroupWise did in the 90s. Think about that: 20 years to redevelop the same fucking thing.

abstraction (3, Insightful)

CAIMLAS (41445) | about 10 months ago | (#45493701)

The biggest difference in the past 10 years is that everything has been abstracted, and there's less time spent dealing with trivial, repetitive things for deployments and upkeep. We support many more users per administrator now than we did back then -- by a massive amount.

No more clickclickclick for various installations on Windows, for instance. No more janky bullshit to have to deal with for proprietary RAID controllers and lengthy offline resilvers. These things have been abstracted in the name of efficiency and the build requirements of cloud/cluster/virtualization/hosting environments.

We also have a lot more shit to take care of than we did a decade ago. Many of the same systems running 10 years ago are still running - except they've been upgraded and virtualized.

Instead of many standalone systems, most (good) environments now have at least a modicum of proper capacity and scaling engineering in place. Equipment is more reliable, and as such, more complexity is considered acceptable: we have complex SAN systems and clustered virtualization systems on which many of these legacy applications sit, as well as many others.

This also makes our actual problems much more difficult to solve, such as those relating to performance. There are fewer errors but more vague symptoms. We can't just be familiar with performance in a certain context, we have to know how the whole ecosystem will interact when changing timing on a single ethernet device.

Unfortunately, most people are neither broad nor deep enough to handle this kind of sysadmin work, so much of the 'hard work' gets done by support vendors. This is in no small part due to in-house IT staffing budgets being marginal compared to what they were a decade ago, with fewer people at lower overall skill levels. Chances are that the majority of the people doing the work today are the same ones who did it a decade ago, in many locations, simply due to the burden of spinning up to the level required to get the work done. In other places, environments limp by on the strength of many cheap systems being thrown at a complex problem, overpowering it with processing and storage that was almost unheard of even 5 years ago.

The most obnoxious thing which has NOT changed in the past decade is obscenely long boot times. Do I really still need to wait 20 minutes for a system to POST far enough to get to my bootloader? Really, IBM, REALLY?!

Re:abstraction (1)

Gothmolly (148874) | about 10 months ago | (#45495403)

Instead of many standalone systems, most (good) environments at least have a modicum of proper capacity and scaling engineering that's taken place.

Except that has nothing to do with what year it is.

Re:abstraction (1)

jon3k (691256) | about 10 months ago | (#45500707)

The most obnoxious thing which has NOT changed in the past decade is obscenely long boot times. Do I really need to wait 20 minutes still for a system to POST sufficiently to get to my bootloader? Really, IBM, REALLY?!

With virtualization it's very rare for me to have to reboot a physical host, and guests reboot in a couple of seconds. So overall that situation seems to have improved dramatically. In my environment, at least.

android or ios? (0)

Anonymous Coward | about 10 months ago | (#45494435)

For medical records management? Fuck no.

No way those are compliant, out of the box, with HIPAA and other privacy and data security requirements... and neither really offers the true device and software management needed for compliance in this sort of application, or for 'enterprises' in general. No fucking way would I allow medical records access on those privacy-sucking, user-tracking, data-compiling whores.

A Surface tablet with Win8 Pro, or a generic one with Linux -- with either software choice properly installed and administered -- or a desktop/laptop with same.

___

Connectivity is the easy part: an ordinary broadband connection with dual LANs, one for clients, one for staff. Staff devices and the staff LAN are ONLY for work; the client LAN is for clients, or for staff's personal devices for personal use. No work stuff through the client LAN, no personal stuff on the staff LAN, and no device ever connects to both, even if only one at a time. Staff connect to their corporate data store or cloud using VPN or another secure connection (in TFA's case, perhaps browser-based HTTPS).

without (1)

holophrastic (221104) | about 10 months ago | (#45494475)

"it's easy to contemplate these tasks being accomplished . . ." without security, without reliability, without stability, without privacy, without confidentiality, without accountability, without redundancy.

If I were to do that, I'd be in breach of at least half of my NDAs, and a few of my SLAs.

Biggest change-Outsource everything! (1)

Anonymous Coward | about 10 months ago | (#45494529)

The biggest change has been in management, who are now trained to outsource anything and everything. Their answer to every question is to outsource it. If an organization has developed internal expertise in some in-depth area, the management will outsource whatever it is, even if they throw away the expertise in the process. And they'll probably fire the employees with the now-useless expertise and give themselves bigger bonuses. So the move to the "cloud" is not being driven by technical people, it's driven by management who gets loss-leading Azure numbers from a sales drone and wants to dismantle their infrastructure to save a few bucks. Some day, dismantling their infrastructure and firing the employees will come back to haunt these shells of what were companies. All a "company" is now is a bunch of managers giving themselves bonuses, and paying outsourcers. There's nothing left of the company any longer, and these shells will eventually collapse.

Ten years? Bah, humbug. (2)

Slartibartfast (3395) | about 10 months ago | (#45494889)

10 years ago really wasn't that big a deal. By 2003, VPN (IPSec and OpenVPN) was fairly robust, and widely supported. PPTP was on the way out for being insecure. Internet was most everywhere, and at decent-if-not-great throughput. Go back five or ten years before *that*, and things were much more difficult: connectivity was almost always over a modem; remote offices *might* be on a BRI ISDN connection (128 kb/s), probably using some sort of on-demand technology to avoid being billed out the wazoo due to US telcos doing this bizarre, per-channel surcharge for ISDN. PPP was finally supplanting (the oh, so evil) SLIP, which made things better, assuming your OS even supported TCP/IP, which was not yet clearly the victor -- leading to multiple stacks to include MS and Novell protocols.

All in all, 2003 was about when things were finally getting pretty good. Leading up to 2000 had been a tough row to hoe. And let's just not even go before that -- a mishmash of TCP/IP, SNA, SAA, 3270, RS-232, VT100, completely incompatible e-mail protocols, network protocol bridges, massive routing tables for SAPpy, stupid protocols... a 100% nightmare. Very, very glad to have left those days behind.

a decade ago was 2003 (1)

Gothmolly (148874) | about 10 months ago | (#45494915)

As in, AD was mostly mature, Win2003 was out, Linux was real, and PCs were commodities. An IT infrastructure now vs _20_ years ago on the other hand would be more interesting. Not much has happened since 2003.

30 years ago (0)

Anonymous Coward | about 10 months ago | (#45495141)

30 years ago we had T1 and multidrop to 3270s on microwave backbone
20 years ago we had token ring and Decnet on fiber in addition to microwave
10 years ago we had Ethernet to thousands of PCs and servers.

NSA, anyone? (1)

BringsApples (3418089) | about 10 months ago | (#45497043)

Since the NSA's snooping has been confirmed, I feel obligated to explain to everyone (I work at a corporate level with many other integrated departments) that things have changed and nothing is secure anymore. So at the level of business buyouts, where secrecy seems to be sooo important, sending all of your email through Gmail isn't a good idea anymore, as all of your data is compromised.

One could almost make a living off of selling Slackware boxes running sendmail with MIMEDefang and SpamAssassin as in-house email servers ;)
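
If anyone is tempted by that business plan, the day-one smoke test is short. A minimal sketch, with hypothetical internal hostnames and addresses (the comment names no specific setup):

    import smtplib
    from email.message import EmailMessage

    # Hypothetical in-house box: Slackware running sendmail with
    # MIMEDefang + SpamAssassin hooked in as the filtering layer.
    msg = EmailMessage()
    msg["From"] = "monitor@example.internal"
    msg["To"] = "postmaster@example.internal"
    msg["Subject"] = "in-house mail smoke test"
    msg.set_content("If you can read this, the filters let it through.")

    with smtplib.SMTP("mail.example.internal", 25, timeout=10) as smtp:
        smtp.send_message(msg)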

Look Brah, no wires! (1)

shoottothrill (1806688) | about 10 months ago | (#45498095)

One of the biggest changes I have seen, along with some of the others that have been posted, is the reduced number of wires we have to run. No thicknet, coax, dedicated, or even Ethernet lines: WireLESS is the infrastructure, and the mobility it allows is wonderful. The reduction in costs is brilliant. Thanks, smart people everywhere who keep advancing our profession -- this one's for you.