DARPA Successfully Demonstrates Self-Guiding Bullets
I am not a US taxpayer, so I don't give a shit how much such a bullet costs. All I know is that sometimes the SEALs or other special ops units serve to protect civilians. Hard to believe, but that is their function. Put aside "the bad terrorists" and just focus on some scenarios in which such a weapon would be extremely useful despite its cost... like, I don't know... maybe it is a stupid Hollywood-style example, but - the Maersk Alabama incident. AFAIK the snipers did an excellent job then, and if such a weapon could help in situations like that, I like it.
A "Never Reboot" Service For Linux
The idea is good in itself, but unless your OS vendor starts supporting it, it is worthless IMHO - let's take RHEL as an example:
* it raises security issues - crucial stuff like kernel code comes from a third party, and that party gives no SLA or other agreement - I don't think the security guys will like that
* it raises support issues - do e.g. RH or Oracle support systems patched this way?
* it (paradoxically) raises the complexity of running the systems, since it involves yet another patch, test, deploy cycle
So it is a cool feature to have, e.g. for a home server, but I won't pay 4 bucks for it. It is cool from a technical standpoint, but unless the operating system vendor itself supports it, it is worthless from my point of view.
Also, I don't see RH or Novell (SUSE) even touching this stuff - I wonder why?
A "Never Reboot" Service For Linux
> When a stock broker's trading floor system goes down, the loss is
> measured in millions of dollars per second
Ksplice does not protect you from servers going down.
> Downtime is just not acceptable under some circumstances.
Still - Ksplice does not make your servers highly available or fault tolerant. It just allows you to patch the server without rebooting.
Any decently designed HA or FT system should have things like service reboots accounted for by design, since it is natural and obvious that you will need to reboot some nodes sometimes. Usually this is referred to as maintenance or planned downtime - quite a different thing from unplanned downtime or disaster recovery - and Ksplice does not deal with that.
A "Never Reboot" Service For Linux
I personally don't really see any use for such a service. If you need an FT or HA system, you need to design it as such from the ground up. In this case, paying 4 bucks just solves some problems with rebooting after a kernel upgrade. I don't have a problem with that - I just reboot in the next service window. Normally, mission critical systems have some sort of redundancy, not only to cope with planned service reboots but with other, unplanned disasters. So usually you have an N+1 redundant cluster in which you can reboot the servers using a procedure that was worked out while DESIGNING the system. Also, I see quite a few security issues with patching the kernel this way. For mission critical services you usually test everything before rolling it out to the systems, so using such a feature just makes things more complicated (then I'd rather simply reboot the machine with my current procedures).
I cannot find any security details on their webpage. They state: "Ksplice Uptrack uses cryptography to authenticate the update feed." So what? Fedora also used cryptography, and once their servers got rooted the whole chain collapsed. So if I were to use their service, I would want to know exactly how their security is implemented, since I would be getting kernel patches (quite critical stuff) from them. At least with RHEL I know about their security procedures (quite rigorous). And from a support point of view: do e.g. Red Hat or Oracle support systems patched this way?
It is a nice feature but IMO not suitable for enterprises yet.
Microsoft To Delete Bing IP Data After 6 Months
At the launch of Bing I used it to test it, and I didn't find any feature that would break my addiction to Google. Even if Bing were as good as Google, it is still different and requires me to learn a new tool. The only reason I would learn a new tool would be if it were any better - but it is not. At least in my opinion.
So my question is - does anybody even use Bing? Recently, I recall using Bing only via the search box on the MSFT KB/Support pages (which use Bing), and it just failed for simple queries like "download something-microsoftish". Google is much better even for searching MSFT sites.
Yes, I know that Google != privacy. But I can cope with that if it works OK.
Do IT Pros Abuse Their Power?
> Any admin worth their pay can run rings around a net-blocker.
What admin? An Oracle admin? An AIX admin? A SharePoint admin? An SAP admin? There are a lot of different types of admins now, and what makes them worth their pay is that they help you run your business and earn money. The ability to run rings around a net-blocker is not something you put on your resume.
Also, in a well implemented network it is not that easy to run around it *undetected*.
Also, by doing so you are clearly breaking the rules that your supervisor set for you - what for? So they can fire you easily if they wish? Mobile broadband internet is like 10 bucks a month (at least here in Poland). Just get your own netbook or laptop and use it for unauthorized Internet access.
Are You Using SPF Records?
Where is the logic in that?
- you use SPF for your own domains
- your school's Zimbra installation scores mail from your domains as spam
Based on these facts, how have you come to the conclusion that SPF doesn't work in general? The fact that your school's Zimbra scores your mail as spam is just a single case, and most probably not related to SPF at all.
Have you looked at the headers of these messages marked as spam? Have you contacted the postmaster?
Are You Using SPF Records?
Some spam filters score on SPF, so not having SPF increases the chance of false positives for your legitimate mail. And since SPF is free and painless to implement (just a few DNS records), I don't see any reason not to use it. It's not like it is anything that significant either.
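To show how painless it is - publishing SPF for a domain boils down to a single TXT record in the zone. A hypothetical sketch (the domain and host names are made up):

```
; zone file fragment for example.com
; "v=spf1 mx ... -all" = only the domain's MX hosts and the listed
; relay may send mail for it; everything else should fail the check
example.com.    IN TXT "v=spf1 mx a:relay.example.com -all"
```

Add ip4: or include: mechanisms for any other outgoing servers, and start with "~all" (softfail) instead of "-all" if you are not sure you have listed everything.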
Well, if I do, then SSL/TLS certificates and cryptography in general are the means to authenticate someone's (or some server's) identity.
So my question is: if the sites on my intranet use proper PKI and SSL/TLS mechanisms, am I still vulnerable to this flaw?
Why the CAPTCHA Approach Is Doomed
Srsly - great. :)
Reasonable Hardware For Home VM Experimentation?
Well, you don't clearly state what you wish to accomplish, nor how much money you have, so it is hard to answer. But maybe a setup like this will be OK.
Build yourself custom PCs.
Storage server:
- a good, big enclosure that can fit a large amount of drives
- a moderate 64-bit AMD processor (really any - you will not be doing any serious processing on the storage server)
- any amount of RAM (really, 1 or 2 gigs will be enough)
- a mobo with good SATA AHCI support (for RAID) and an onboard NIC (any - for management)
- one 1Gb PCI-* NIC with two ports
- 6x SATA2 NCQ HDDs (any size you need) dedicated to a software-based (md/mdadm) RAID1+0 array
Virtualization servers (2 or more):
- the virtualization servers need to have the same config
- any decent enclosure you can get
- the fastest 64-bit AMD processor you can get, preferably tri- or quad-core (it will do the processing for the guests), with hardware virtualization (AMD-V) extensions
- as much RAM as you can get/fit into the machine
- a mobo with virtualization support and one onboard NIC (any - for management)
- one 1Gb PCI-* NIC with two ports
- one moderate SATA disk for local storage (you will be using it just to boot the hypervisor) or a disk-on-chip module
Network switch and cables:
- any managed 1Gb switch with VLAN and EtherChannel support; HP switches are quite good and not as expensive as Cisco
- good CAT6 FTP patch cords
General notes for the hardware:
- make sure all of the PC hardware is *well* supported by Linux, since you will be using Linux :)
- get better (quality-wise) components if you can - good enclosures, power supplies, drives etc. - since it is a semi-server setup, you don't want it to fail for some stupid reason
- make two VLANs - one for storage, the other for management
- plug the onboard NICs into the management VLAN
- plug the HBA NICs into the storage VLAN
- configure the switch ports for EtherChannel and use bonding on your machines for greater throughput
- for the storage server just use Linux
- for the virtualization servers use Citrix XenServer 5 (it is free, has nice management options, supports shared storage and live migration) or vanilla Xen on Linux; don't bother with VMware Server, and VMware ESX and the Microsoft solutions are expensive
Storage server setup:
- install any Linux distro you like (CentOS would not be a bad choice)
- use the 64-bit version
- use Linux software RAID (md/mdadm) for the array and LVM for volume management
- share your storage via iSCSI (iSCSI Enterprise Target is in my opinion the best choice)
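To sketch the export step - assuming the RAID array already exists as /dev/md0; the volume names, sizes and the iSCSI target name below are made up for illustration:

```
# carve an LVM volume for VM disks out of the array
pvcreate /dev/md0
vgcreate vg_storage /dev/md0
lvcreate -L 200G -n vm_store vg_storage

# /etc/ietd.conf - export the volume with iSCSI Enterprise Target
# Target iqn.2009-01.local.storage:vm-store
#     Lun 0 Path=/dev/vg_storage/vm_store,Type=fileio
```

Restart ietd afterwards and point the virtualization hosts' iSCSI initiators at the target (Type=blockio may perform better for VM images).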
Virtualization servers setup:
- install XenServer 5 (or any distro with Xen - CentOS wouldn't be bad)
- use interface bonding
- don't use local storage for VMs - use the storage network instead
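For the bonding part, a sketch of a RHEL/CentOS-style config - the interface names, IP address and bond mode are assumptions, and the mode has to match what you configure on the switch ports:

```
# /etc/modprobe.conf - set up the bonding driver
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.11
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (and the same for eth2)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

mode=802.3ad is LACP, which pairs with an EtherChannel configured for LACP on the switch side.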
Well, here it is. A quite powerful and cheap virtualization solution for you.
SoHo NAS With Good Network Throughput?
An off-the-shelf NAS device will be not only slow but also full of various bogus bugs, for which you'll have to wait for the vendor to issue a firmware update...
Just build it yourself - build a PC. You have plenty of options:
1. If you have a rack somewhere, buy a low-end 2U rack server with enclosures for SATA disks and a decent RAID controller.
2. Build yourself a PC in a tower enclosure. Get some (cheapest) Core 2 Duo mobo and a mediocre amount of RAM - SMB, NFS and AppleTalk servers on a Linux operating system will eat up something like 80MB for the system and 10MB per client computer - go figure; the rest of the RAM goes to I/O buffers. Stuff as many SATA disks into it as you can (like 4x 1TB). Set it up with software RAID, and you are done. It will probably be much cheaper than a decent NAS box (the so-called SoHo boxes are not worth even looking at).
Do so and you will have decent storage that is more efficient than your network.
You asked about network efficiency? Well, that has nothing to do with the NAS box. You can have the best performing NAS box - but if your network is weak, well, there goes your efficiency.
So as for the network, buy a manageable switch that can cope with Linux channel bonding - with that you can bond N ethernet channels and get network transfers somewhat lower than N * interface speed.
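For reference, the software RAID part of such a build boils down to a few mdadm commands - a sketch assuming four data disks sdb..sde (device names, filesystem and mount point are made up):

```
# build a RAID10 array out of the four data disks
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
# create a filesystem and mount it where the file server will share it
mkfs.ext3 /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage
# record the array so it is assembled automatically on boot
mdadm --detail --scan >> /etc/mdadm.conf
```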
Vista To XP Upgrade Triples In Price, Now $150
> Honestly, I don't know what all the resistance to
> Vista is all about.
Well, a few hints.
1. I work in a large org (50k workstations); we don't even test Vista yet. Supporting Vista with the software we have to run (custom stuff that supports our business) would be such a nightmare that I can't even imagine it. The software barely works on XP and 2K - but still, it works, and it works for my salary.
That point was me as an IT manager - now me as a normal person...
2. I somewhat support computers for my family (mother, father, grandparents etc.), and I don't really mind doing it. My mother has just barely got her way around XP so she can do the stuff she needs with her laptop. Now please explain to me what advantages Windows Vista will give her that justify the need to relearn loads of basic things.
I personally use Linux and Windows Server 2008 for my computing needs, but I am not the problem here. I could easily use just about anything, so it makes no difference to me...
Nagios 3 Enterprise Network Monitoring
Well, for me, what ruled out Nagios was:
1. It is painful to set up. Don't get me wrong - I've sat my time over the configuration, I think I know it a little bit, and I can easily set it up for like 100 hosts with some templates + includes + sed magic. But that is what *I* can do. Not all of my staff can do it, and it really is not easy.
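For the curious - the template approach boils down to defining a generic host once and inheriting from it. A minimal sketch (the template name, host name and address are invented):

```
# a template only - "register 0" keeps it from being treated as a real host
define host {
    name                  generic-branch-host
    check_command         check-host-alive
    max_check_attempts    3
    check_period          24x7
    notification_interval 60
    notification_period   24x7
    contact_groups        admins
    register              0
}

# a concrete host inheriting everything from the template
define host {
    use        generic-branch-host
    host_name  srv-branch-01
    address    10.1.1.10
}
```

Multiply the second stanza per host - with sed or a loop - and you have the "100 hosts with templates" config.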
2. It is not distributed. The checks can be distributed, but you cannot have like 20 child Nagios nodes managed by local staff and parent nodes that gather data from the children. This is a killer feature of Zabbix for me. I can send out a standard, preconfigured box/server with Zabbix to my local staff, give them access via LDAP/AD, and tell them to configure it so it suits *their* local setup (we have quite uncommon/unstandardized branches - historical/political reasons). Then I can gather data from their local systems (which they have configured) and process it in a central place, so I have a clear overview of what is going on in the infrastructure. I really have no clue how to do this with Nagios - it is probably possible with some ninja-like hacking, but ninja-like hacking is not something you want in a big organization. You need clean, manageable stuff.
3. Zabbix can collect and really process historical data. If for some reason I want to know how my network bandwidth evolved over the past year, I can quite easily click through and get some nice graphs and reports, and even forecast some things based on various trends.
To summarize: Nagios seems to me like the perfect tool for a sysadmin, but it is not so good for enterprise monitoring, where you have quite different goals.