ISPs 'Exaggerate the Cost of Data'
When you shop around for hosting, the price per GB varies wildly. Amazon's EC2 is near the top at $0.12/GB, while Cogent at $5/Mbit (~$0.015/GB) is one of the cheapest for transit/paid traffic.
Even less? How about free, using peering agreements on internet exchanges? That way, providers like Hetzner can sell their bandwidth for even less: 5-10TB included and €6.90/TB after that (~€0.0069/GB).
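To see where the ~$0.015/GB figure for Cogent-style transit comes from, here's a quick back-of-the-envelope sketch (assuming a 30-day month, decimal gigabytes, and a fully used committed rate; the function name is mine):

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s in a 30-day month

def transit_cost_per_gb(price_per_mbit: float) -> float:
    """Convert a $/Mbit/s/month transit price into $/GB transferred,
    assuming the committed rate is saturated all month (decimal GB)."""
    bytes_per_month = (1_000_000 / 8) * SECONDS_PER_MONTH  # 1 Mbit/s sustained
    gb_per_month = bytes_per_month / 1e9                   # = 324 GB
    return price_per_mbit / gb_per_month

print(round(transit_cost_per_gb(5.0), 4))  # Cogent at $5/Mbit -> 0.0154
```

So a saturated 1 Mbit/s commit moves about 324 GB a month, which is how $5/Mbit works out to roughly a cent and a half per gigabyte.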
ISPs should just whine less and do their homework. I can understand small ISPs having trouble when leasing lines from the larger ones (the article uses Trimco vs BT as an example), but the main problem is that the larger ISPs promote this "bandwidth is expensive" myth even harder...
Amazon EC2 Crash Caused Data Loss
And that's exactly how EBS is supposed to be backed up - it continuously saves snapshots to S3: small, cheap incremental backups stored in a 99.999999999% durable storage area. But apparently, Amazon messed up the backed-up copies as well - instead of producing an outdated but valid snapshot, they replied to affected customers with:
A few days ago we sent you an email letting you know that we were working on recovering an inconsistent data snapshot of one or more of your Amazon EBS volumes. We are very sorry, but ultimately our efforts to manually recover your volume were unsuccessful.
University Switches To DC Workstations
Huh? The linked products are beyond horrible compared to any decent and MUCH cheaper AC PSU. Just look at any half decent review site, like the awarded products @ hardwaresecrets.
I won't be paying $280 for a 400W DC PSU with 65% efficiency when I can get a whisper-silent 500W PSU at 87%+ efficiency for $99 (Enermax Pro87+), or a fanless Seasonic X-400 for $134. The numbers just don't make sense. And yes, these are "honest" wattages - the 400W unit actually delivered 600W in overload testing. You do have to do your homework when buying a PSU, but it really isn't that hard nowadays - aim for 80+ Gold and you're usually safe.
I don't really care how simple and straightforward a DC system is - if it costs me 2-3x as much to buy and wastes 30% of the input power as heat, count me out.
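To put numbers on the wasted-heat complaint, here's a rough sketch of the running-cost difference. The load, duty cycle and electricity price are my assumptions, not from the article:

```python
def wall_power(load_w: float, efficiency: float) -> float:
    """AC power drawn at the wall to deliver load_w of DC output."""
    return load_w / efficiency

def yearly_cost(load_w, efficiency, hours_per_day=8, price_per_kwh=0.10):
    """Yearly electricity cost for one workstation PSU."""
    kwh = wall_power(load_w, efficiency) * hours_per_day * 365 / 1000
    return kwh * price_per_kwh

# At a 300 W load, the 65% DC unit pulls ~462 W at the wall,
# the 87% AC unit only ~345 W.
print(round(yearly_cost(300, 0.65) - yearly_cost(300, 0.87), 2))  # -> 34.08
```

That's roughly $34/year per seat at these assumed rates - on top of the 2-3x purchase premium, not instead of it.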
Are You Sure SHA-1+Salt Is Enough For Passwords?
You have already lost everything. Just edit the login form to mail all the passwords to you and kill the session table. Done. No amount of salting trickery will save you.
The article is kinda stupid too - who in the world still uses a static salt? Most proper frameworks, like Django, store algo$salt$hash in the database, so the developer can switch algorithms at any time (applied on next login) and use unique salt values per user. Running SHA1() a hundred times won't turn a polynomial-time problem into an exponential one - there will always be a better GPU next year that can create rainbow tables just as easily.
Just save the "password" as a three-part tuple and use unique salts. You'll be safe until they finally get quantum computers working.
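A minimal sketch of that algo$salt$hash scheme using only the Python stdlib (function names are mine; a real deployment would use a deliberately slow KDF like bcrypt or PBKDF2 rather than a single hash round):

```python
import hashlib
import hmac
import os

def hash_password(password: str, algo: str = "sha256") -> str:
    """Store the password as an 'algo$salt$hash' tuple with a unique
    per-user salt, so the algorithm can be swapped on next login."""
    salt = os.urandom(16).hex()
    digest = hashlib.new(algo, (salt + password).encode()).hexdigest()
    return f"{algo}${salt}${digest}"

def check_password(password: str, stored: str) -> bool:
    """Re-hash with the stored algo+salt and compare in constant time."""
    algo, salt, digest = stored.split("$")
    candidate = hashlib.new(algo, (salt + password).encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)

stored = hash_password("hunter2")
print(stored.split("$")[0], check_password("hunter2", stored))  # sha256 True
```

Because the algorithm name travels with each record, you can verify old hashes and silently re-hash with a stronger algorithm the next time the user logs in.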
The Care and Feeding of the Android GPU
Let me tell you one thing about that: Java isn't the problem. By my definition of feeding the GPU - triangles/sec, fillrate and OpenGL ES objects/sec - Java is just 10% behind a raw C benchmark like glbenchmark 1.1/2.0. They quoted 880kT/s; I managed 750kT/s in non-native code. And to get that, you have to carefully feed the GPU with the right batch sizes, avoid issuing too many state changes, pack things interleaved in the vertex buffer, skip dynamic point lights, etc. It isn't as bad as an NDS, but the Snapdragon GPU is quite hard to tame.
The problem with using the GPU is that every context switch requires a complete reinitialization of the GL context - even on a PC, alt-tabbing into and out of fullscreen games takes ages. That's fine when specific applications that need the speed use it directly, but not when going from one activity to another gives you a loading screen.
Animation performance and touch responsiveness? Is that the best he can come up with for such a title? I have no idea what he's talking about - scrolling in the browser works just fine here on a not-so-recent HTC Desire. The only time things break down is when the garbage collector halts everything for a third of a second (see the DDMS/logcat messages), and those pauses are reduced to sub-5ms in the new builds. That's tons more useful than rendering surfaces to quads and drawing them with OpenGL ES, and IMO the Android team made the right decision.
Microsoft Security Essentials 2.0 Released
It says "$8.64 US per user or per device, per year" on this page. It's not free, but it's far from horrible - although the paperwork to purchase your first licenses could be a bit insane for just $100.
Java IO Faster Than NIO
So when you're pushing data as fast as you can through a socket, the old read(byte) or write(byte) are faster? Wow, no kidding.
You do NOT use java.nio (like Jetty's SelectChannelConnector) for maximum throughput. You use it to handle persistent connections, like all those long-polling AJAX requests that return on an event or time out after a minute. This article is like recommending Apache, with its hard limits on concurrent requests, over newer asynchronous servers like Nginx for serving static media with keep-alive enabled.
The slides even mention the C10K problem, but what they don't mention is when to use either technology: async IO for concurrency and endless scaling, synchronous IO for pushing a 10G Ethernet link to its limits. No wait, the nio setup can do that too - 700MB/s, or 5.6Gbit/sec per core on 2008 hardware, should be enough to max out anything you can buy now. It's great that synchronous IO can hit 1GB/s, a whopping 30% faster, but useful? I'd say no.
Most users shouldn't use either API directly. Let's be honest here: writing highly concurrent software is hard, so why reinvent the wheel when off-the-shelf software can do it better? You use Jetty and choose between the SelectChannelConnector or SocketConnector, or choose between Apache and Lighttpd/Nginx depending on the traffic pattern. What you do write is the bit that accepts a whole HTTP request and returns an HTTP response - everything before and after is magic.
Unless you're a file server, each 50K HTTP response requires enough work that you'll run out of CPU or disk IO long before you hit even the 100Mb/s ceiling of most rack switches. Even if your app is fast, 16 cores x 100ms per request x 50K per response is only 62 Mbit/s. Not 5600.
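The 62 Mbit figure works out like this (a sketch; binary prefixes assumed, as the comment's arithmetic implies, and the function name is mine):

```python
def app_limited_mbit(cores: int, secs_per_request: float, response_kb: int) -> float:
    """Bandwidth a CPU-bound app can actually fill, regardless of how
    fast the socket layer underneath is (binary Mbit/s)."""
    requests_per_sec = cores / secs_per_request        # 16 / 0.1 = 160 req/s
    bits_per_sec = requests_per_sec * response_kb * 1024 * 8
    return bits_per_sec / (1024 * 1024)

print(app_limited_mbit(16, 0.1, 50))  # -> 62.5
```

In other words, at 100 ms of application work per request, the socket layer's GB/s headroom is two orders of magnitude away from mattering.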
But if you need to scale in concurrent client count, there's no way around async IO. The latest name to watch is Netty. In "Plurk Comet: Handling 100,000+ Concurrent Connections with Netty", it scales to 100,000 concurrent connections on a quad-core server at 20% CPU load.
Just stop worrying about sockets already, and start worrying about your SQL server suffering a meltdown. Even if you manage to grow to Facebook's scale, synchronous IO won't save you from deploying 30,000 servers - it's the application code that's slow. Zero-copy, one-copy or "string concatenation style twenty copies response building" socket writes don't matter at all: memcpy is cheap compared to a few lines of interpreted code, servers are cheap compared to developers, and never mind the cost of the programming gods giving these presentations.
SeaMicro Unveils 512 Atom-Based Server
The 130W parts are usually only sold to people who don't care about performance per watt; the X5670 is a better choice. Using Intel's power numbers - the claimed 4W/CPU including the rest of the system is just ridiculous - you get 386 points/8W, or ~48 points per watt, for the Atom, versus 9356 points/95W, or ~98 points per watt, for the Xeon X5670. The Xeons win the race by a 100% margin. Even the X5680 scores 80 points/W. And if you rig the numbers and use 386/4W, the Xeon still wins. What savings?
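The arithmetic behind those points-per-watt numbers, as a sketch (the benchmark scores and wattages are the ones quoted above):

```python
def points_per_watt(score: float, watts: float) -> float:
    """CPU benchmark points delivered per watt of (claimed) power draw."""
    return score / watts

atom = points_per_watt(386, 8)          # Atom at Intel's 8 W figure
atom_rigged = points_per_watt(386, 4)   # Atom at the "rigged" 4 W claim
xeon = points_per_watt(9356, 95)        # Xeon X5670
print(int(atom), int(atom_rigged), int(xeon))  # -> 48 96 98
```

Even granting the Atom its most generous power figure, the big Xeon still edges it out on efficiency while delivering 24x the single-node performance.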
Also, no matter how much power they claim to save with this beast, it's nothing compared to virtualizing a rack or more into 3 ESX hosts. With this Atom "solution", you'll just have tons of nodes sitting at 0.00 load again, instead of having all the physical servers at a load you're comfortable with. To make things worse, any job you run on the Atoms will run in slow motion because of that CPU's "performance" - you can't burst to the full 2/4/8 allocated vCPUs at 3x the speed plus turbo when needed. It's *much* harder to use 512 slow CPUs than fewer, faster ones.
And never mind running any commercial software on it. The per core/socket license will make it impossible.
Best Solutions For Massive Home Hard Drive Storage?
No OS is immune to fragmentation. On a data store disk with ext3 and tons of files in the 5M range, this is what happened (sudo filefrag *):
rt-01n8vmuqn8xtls6d.w4c: 141 extents found, perfection would be 1 extent
rt-01n9q0j59s1sovam.w4c: 23 extents found, perfection would be 1 extent
rt-01nk9zgmitrsow7g.w4c: 8 extents found, perfection would be 1 extent
rt-01nlrr9aaasuk0yb.w4c: 20 extents found, perfection would be 1 extent
rt-01o3kwc33nhpgqg4.w4c: 41 extents found, perfection would be 1 extent
rt-01o3p9b4x2mfbwem.w4c: 16 extents found, perfection would be 1 extent
rt-01ohtzjkl2z2y3wl.w4c: 17 extents found, perfection would be 1 extent
rt-01orb2yYTsp1vALN.w4c: 1 extent found
rt-01orz1hkb5jzbepv.w4c: 29 extents found, perfection would be 1 extent
rt-01q9x02lltcvogr1.w4c: 62 extents found, perfection would be 1 extent
rt-01qq34rl6exztyx3.w4c: 17 extents found, perfection would be 1 extent
rt-01qrz236bvnim44i.w4c: 14 extents found, perfection would be 1 extent
Solution? None. Just add more drives. "Sequential" reads now run at 15MB/sec if you balance the load over the RAID1 array, which isn't too bad, but if it were an issue I'd take NTFS with its safe and secure online defragmentation API over Linux anytime.
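If you'd rather track fragmentation over time than eyeball it, the filefrag output above is easy to parse. A sketch (the line format is the one shown above; the function name is mine):

```python
import re

def extent_counts(filefrag_output: str) -> dict:
    """Map each filename in `filefrag *` output to its extent count.
    A count of 1 means the file is perfectly contiguous."""
    counts = {}
    for line in filefrag_output.splitlines():
        m = re.match(r"(\S+): (\d+) extents? found", line)
        if m:
            counts[m.group(1)] = int(m.group(2))
    return counts

sample = "rt-01n8vmuqn8xtls6d.w4c: 141 extents found, perfection would be 1 extent"
print(extent_counts(sample))  # {'rt-01n8vmuqn8xtls6d.w4c': 141}
```

Piping `sudo filefrag *` through this once a week would tell you whether the store is degrading or has reached a steady state.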
Website Mass-Bans Users Who Mention AdBlock
Too bad the price difference between them and the others is far too large. I'm not talking about tens of percent, but 2 cents per 1000 views vs the 50 cents we currently receive for US traffic. For international traffic, divide by 5 to 10. Advertising revenue is bad enough already - unless you serve millions of pages a month, you're not going to break even. Reducing that by another 90% is plain suicide; if you can take a 90% pay cut, it's probably more effective to remove the ads entirely and add a donation button.
I really would love to support them, but advertisers just do not want to advertise internationally with the same ad. Even brands like Dell have 30 different versions of their ads, one for each country, and depending on where the visitors are, they get their local version with prices in their own currency and the text in their own language - it simply works better that way. If you serve me US prices for Dell, I still have no idea what the final price is in euros after the import duties, VAT and the price difference of Dell NL vs Dell US, so that makes the ad useless for 70% of the visitors. Project Wonderful can never achieve this with their model. This internationalization is one of the main reasons editors on sites have lost control of the advertisements - there's just no way you (or anyone) can review thousands of ads each day...
I hate how the ads market works and I'd love to see a fix for it, but Project Wonderful isn't it. The market is completely in control of the advertising networks, it's hell for us independent publishers; we just get a check every month, and there's nothing we can do to influence it.
I would love to see such a feature - it would make life a lot easier for everyone hosting an ad-revenue-dependent site. The standard Slashdot answer of NoScript/Adblock/hosts file doesn't solve anything: there will always be users who don't mind advertisements as much as you do, and it's our job to protect them from harm.
Yes, I run a site that has ad revenue. No, I don't deal directly with the scareware crowd - I sell my space to Google, Right Media, AOL etc. But if they make a mistake and a user gets served a bad ad, I'd love to know which network it came from, so I can demand they take the ad down ASAP - and if it happens repeatedly, I'll take my business elsewhere.
But browsers just lack that information at the moment, so to report an ad, we ask our users to follow the procedure below:
- On IE, press F12 to access the debugger, on Firefox, install Firebug and press F12. In Chrome, use Ctrl-Shift-I.
- In Chrome: select the "html" element, copy and paste the data into a text file, and save it.
- In IE: Ctrl-S inside the developer tools saves a version I can use. Don't use save from the IE screen.
- In Firebug: Select the html tab, select the html node, right click and select copy innerHtml. Paste it into a text file and save.
- Email me the result
It will contain a lot of useless info, but somewhere in between, there's the magical <script> tag plus the generated (=bad) content for verification.
Things To Look For In a Web Hosting Company?
@Second question - if you expect to have to scale up, I'd start with at least a VPS and move up to a (managed) dedicated server when the time comes. Providers using a shared hosting setup like Apache + setuid fit thousands of accounts onto one machine; they won't like it when you're running anything more than a small blog. And on any app of decent complexity, an SSH shell is a must-have for debugging and management. Most shared hosts are quite restrictive about what you can run as well: quite a few run outdated versions of Python and Ruby, and installing extra packages is impossible, so for a web app, a VPS is almost always the minimum you need.
One vendor we considered was Media Temple. Their VPS offerings (not the grid service) aren't the cheapest, but they look more polished than the others. The $50 for hosting is probably the cheapest part of the project, and if you ever reach the limits of the VPS, there's still plenty of time to switch to a bigger package or another host. By then, you'll have a good idea of your site's computational requirements.
We didn't go with them though - after benchmarking and testing, what we needed was a bit too expensive to rent. We went with 10U of rack space and enough hardware to fill most of it instead. Pro: can't beat the price, and total freedom in choice of OS and software. Con: you have to manage everything yourself and pay for all the hardware upfront.
After Learning Java Syntax, What Next?
Now that you understand the basics, it's time to put them to the test. Reading books will only get you so far; the rest is experience, experience and more experience.
The more you apply what you've learned, the more you'll discover. Want to create a book indexing application? You'll find you need to master either Swing or SWT; that you need some kind of storage, and must learn to use flat files, an ORM like Hibernate, or hand-written SQL queries via JDBC, plus how to set up a database like PostgreSQL. If you go for a web interface, learn about JSP and containers like Tomcat. Want to do stuff in 3D? Learn OpenGL through the JOGL bindings and read up on basic linear algebra.
Or you could jump from Java to another language. Don't get me wrong - in my opinion, Java is still one of the best designed languages, and a huge plus is that you can step through everything to see how it works, right down to the C++ code powering java.nio. But it's a lot of work to get results on the screen, if you don't count System.out.println(). ActionScript is quite a bit easier to apply - you can get going and write a small game in no time. Python + Django is the perfect web framework for starters: almost as powerful, but much, much easier to learn than JSP + taglibs with their arcane XML configuration files, or plain servlets.
Once you have seen a few languages and discover their strengths and weaknesses, you'll be able to apply your skills even better. In a digital world, everything is possible. Go create your own future.
86% of Windows 7 PCs Maxing Out Memory
On any recent Linux system, free reports only truly free memory, not the page/disk cache - that's shown in the cached column. (sorry, no pre html tag @ /.)
             total       used       free     shared    buffers     cached
Mem:         48310      48090        220          0         94      15120
-/+ buffers/cache:      32875      15435
Swap:        16383         25      16358
PostgreSQL's performance depends on the page cache, so you can't see all of cached as free - if you let the cached number drop too much, your disks die.
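The "really free" number is just free + buffers + cached, which is what the -/+ buffers/cache line reports. A sketch using the values from the output above (the function name is mine):

```python
def available_mb(free: int, buffers: int, cached: int) -> int:
    """Memory an application could actually claim, in MB: the page
    cache is dropped on demand, so it counts as available, not used."""
    return free + buffers + cached

print(available_mb(free=220, buffers=94, cached=15120))  # -> 15434
```

(The 1 MB difference from the 15435 shown above is rounding in free's per-column output.) But as said, on a database box you can't treat all of that as spendable.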
Malicious Spam Jumps To 3B Messages Per Day
Emails do not always come from humans.
A good deal of the thanks-for-registering / receipt / mailing list messages come from email@example.com, and the body of the email always includes a way to contact a human being if needed. That's quite different from a call from an unknown number - you put your own email address into our system yourself, with a "please contact me" note.
Malicious Spam Jumps To 3B Messages Per Day
As much as I hate spam, I hate overzealous gateways on the internet even more. Earthlink, for example, refuses to receive mail without a valid return address (so a no-reply@ sender must actually accept replies) and sends you one of these:
I apologize for this automatic reply to your email.
To control spam, I now allow incoming messages only from senders I have approved beforehand.
If you would like to be added to my list of approved senders, please fill out the short request form (see link below). Once I approve you, I will receive your original message in my inbox. You do not need to resend your message. I apologize for this one-time inconvenience.
Click the link below to fill out the request:
There's no way I'll waste my time filling in that form, so I've added a big warning on the registration page now - sorry, users of overzealous ISPs, please disable your spam filter if you can, or just register from another email address.
Ex-Pirate Bay Admin Launches Micropayment Service
Try a processor with AES-NI. Tom's Hardware got 3570MB/s, or ~27Gbit/s, out of a single Intel i5. With the new Xeons in March, you'll get 3x the cores and 2x the sockets in a single system.
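The unit conversion behind that claim, sketched out (binary prefixes assumed; the function name is mine):

```python
def mb_s_to_gbit_s(mb_per_s: float) -> float:
    """Convert a MB/s throughput figure to Gbit/s (1 Gbit = 1024 Mbit)."""
    return mb_per_s * 8 / 1024

print(round(mb_s_to_gbit_s(3570), 1))  # 3570 MB/s -> 27.9 Gbit/s
```

So one desktop core family already encrypts faster than most sites will ever push traffic.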
What I don't get is how you can have such a high bandwidth requirement but not the budget for enough hardware to just brute-force this - even a single gigabit connection's worth of traffic costs more per month than a stack of 1U servers.
Restructured Ruby on Rails 3.0 Hits Beta
The same goes for ActiveRecord. It's great in simple cases, but falls apart rapidly when you're developing larger web apps, especially when you're performing complex data retrieval. It gets even worse if you need to optimize that data retrieval. At this point, ActiveRecord becomes a huge pain in the ass, rather than a useful tool.
Hmm, can you explain why? I haven't worked with Rails much, but in Django, if things are too slow - say, a really complex query spanning so many tables that the PostgreSQL optimizer chokes - you can hand-optimize easily, either by rewriting how you specify it in Python or by breaking it up into multiple statements. You can also retrieve the data as plain tuples or dicts if you need to fetch thousands of rows (100k+; I have no problem with 20k/query the normal way at all). If all else fails, plain SQL is just two statements away, with an easy way to turn the results back into objects.
A good ORM recognizes that some situations fall outside the common/simple use cases, and should assist you with the harder things, not work against you.
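That escape hatch pattern - hand-tuned SQL mapped back onto lightweight objects - isn't Django-specific. Here's a sketch using stdlib sqlite3 standing in for the ORM's raw-query interface (the table and data are made up for illustration):

```python
import sqlite3
from collections import namedtuple

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, pages INTEGER)")
conn.executemany("INSERT INTO book (title, pages) VALUES (?, ?)",
                 [("Dive Into Python", 413), ("SICP", 657)])

# Hand-optimized SQL, then map the plain tuples back onto objects -
# much cheaper than full model instances when fetching 100k+ rows.
Book = namedtuple("Book", "id title pages")
rows = conn.execute(
    "SELECT id, title, pages FROM book WHERE pages > ? ORDER BY pages", (500,))
books = [Book(*row) for row in rows]
print(books)  # [Book(id=2, title='SICP', pages=657)]
```

The point is that dropping down a layer shouldn't mean abandoning the object model entirely - the ORM should help you climb back up.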
Paypal Reverses Payments Made To Indians
No it isn't.
PayPal isn't just a way to pay for stuff. It's a cross-country, cross-currency payment method with an extensive list of options to integrate it into your frontend and backend. For example, we use it to receive money for subscriptions, apply the applicable VAT rate according to user location, and hook into the payment system for direct account activation with IPN. Especially that last one is a killer - I haven't seen any PayPal "alternative" do that yet, and while integrating with my bank via a merchant account is probably possible, the monthly fee plus transaction costs are much more than just 2.5% + 0.30 per transaction.
I'd love to see an alternative to PayPal, but so far, nothing matches it in flexibility and reach. Most PayPal killers are a joke - the easiest way to tell is to check the developer section. Instead of PayPal's hundreds of pages describing everything in detail, plus a sandbox, the Gunpal one is a forum with 4 posts? I rest my case.
Apple's "iPad" Out In the Open
A quick comparison:
iPad: iPhone OS. T91MT: Windows 7 home premium included, runs any other x86 OS.
iPad: No multitasking. T91MT: Yes. Even better with a $40 upgrade for a 2GB stick of ram.
iPad: Safari only. T91MT: Firefox and Chrome for me, oh, and it has Flash too. IE is also an option, but no one uses that :P
iPad: App store only. T91MT: Anything x86.
iPad: Has a notepad. T91MT: runs the full version of Office/OneNote, and Windows Journal is free. Best handwriting recognition, with math support.
iPad: SketchBook Mobile, finger only. T91MT: SketchBook Pro, Corel Painter, Inkscape and lots more. Has a stylus, lacking only pressure sensitivity.
iPad: Dockable keyboard. T91MT: Convertible tablet.
After a few weeks of serious use, I must say that while having a touch screen is a major plus, you won't be entering a lot of text with the on-screen keyboard. Seriously, even a longer URL can take a while to enter by tapping letters or writing and correcting them, and complex passwords fail at random. I only use tablet mode for note taking, browsing the read-only web, watching videos and reading PDFs in full screen - the speed difference in text entry is just too large. Be ready to bring the wireless keyboard along with the iPad all the time.