Longtime Linux Advocate Don Marti Tells Why Targeted Ads are Bad (Video 1 of 2)
I used to believe this, and then came Stack Overflow. One day I was reading an answer on SO, and it hit me: compare Stack Overflow, which is fully ad supported, with its arch-rival Experts Exchange, which, though it has ads, is mainly subscription supported. Which would you rather use?
Hybrid Hard Drives Just Need 8GB of NAND
I saw these two excerpts:
> "Research found that normal **office computers**, not running data-centric applications, access just 9.58GB of unique data per day
> cease production of 7200 RPM **laptop drives** at the end of 2013, and just make models running at 5400 RPM
So let's take research on one market segment (office computers) and apply it to a completely different market segment (consumer laptops). I'm sure that'll work out just fine.
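Even taking the study's own number at face value, the arithmetic is worth a quick sketch. This is just a back-of-the-envelope calculation using the two figures from the quoted excerpt; whether they apply to laptops at all is exactly the questionable leap.

```python
# Back-of-the-envelope: what fraction of a day's unique data would the
# proposed NAND cache hold, best case? Figures are from the quoted study
# (office desktops) -- applying them to laptop workloads is the leap.
unique_gb_per_day = 9.58   # unique data accessed per day, per the study
nand_cache_gb = 8.0        # proposed hybrid-drive cache size

coverage = nand_cache_gb / unique_gb_per_day
print(f"Best-case cache coverage: {coverage:.0%}")
```

And that's the *best* case: it assumes the cache is populated with exactly the right blocks, and that a laptop's working set looks anything like an office desktop's.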
MagicPlay: the Open Source AirPlay
Even if they get this technically perfect, it can never work, because it will never be supported by Apple.
I can already use AirPlay mirroring to transmit not only my iPad/iPhone or Mac screen; through additional software I can also mirror my Windows or Linux desktop and even an Android tablet or phone. Oh, and the receiver doesn't have to be an Apple TV. A Mac or PC can receive streams as well. You can even hack a Raspberry Pi or XBMC to receive AirPlay, too. The only major missing device categories today are Windows RT and BlackBerry 10... and if they know what's good for 'em, they'll open up APIs to make this possible. Apple's AirPlay is already the lingua franca of wireless A/V, even more so than technologies like WiDi.
AirPlay today is already *everything* MagicPlay wants to be, with the exception that MagicPlay has virtually no chance of ever working on an iPad.
The only way this changes is if Apple decides to go legal on the third party tools making this possible... tools which currently have their blessing... a move which would make no sense, as the core technology is too easy to duplicate (as proven by this very story). Moreover, the move would make some new enemies in tech circles and especially in education (historically an Apple stronghold), because at that point there will be no hope for places like conference centers/auditoriums/classrooms to easily have a single generic point of contact for wireless display.
I will grant that if Apple does go the legal route, MagicPlay could be well-positioned as the single alternative supported by all the competitors: Android, Windows (regular and extra-crispy Metro), Linux, etc.
Snowden Is Lying, Say House Intelligence Committee Leaders
Sadly, an (almost certainly soon to be) convicted felon has more credibility than any member of that committee right now.
That's a strong indication that every single one of them, regardless of party, ought to resign. If you're not credible, you're not qualified.
How Maintainable Is the Firefox Codebase?
... that have no meaning at all.
Impacting 8 files on average would be horrible... for a project with 8 files. But how many is that relative to the size of Firefox?
11% of files in Firefox are highly interconnected... but how does that compare to other projects of similar scope?
The one value in that summary that had any meaning at all was the comment that the percentage of interconnected files "went up significantly following version 3.0". That at least has some relative measure we can use as a base.
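The complaint about missing denominators can be made concrete. The project sizes below are made-up placeholders, not actual Firefox statistics; the point is only that the same absolute number swings from catastrophic to negligible depending on what you divide by.

```python
# Why "a change impacts 8 files on average" is meaningless without a
# denominator. Total file counts here are illustrative placeholders,
# NOT real Firefox numbers.
avg_files_impacted = 8

for name, total_files in [("tiny project", 8), ("large project", 10_000)]:
    share = avg_files_impacted / total_files
    print(f"{name}: 8 of {total_files} files = {share:.2%} of the codebase")
```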
MySQL Founders Reunite To Form SkySQL
You could also call it the poster child for closed source software: a company chose open source because it was cheaper in the short term, and now that they've outgrown it they're stuck devoting countless engineering hours to make the solution work anyway. A closed source system might have cost (a lot) more up front, but may also have required (a lot) fewer engineering resources long term.
Not that I'm saying this is definitively true for the Facebook case study. Rather, my point is that we just don't know. Calling this case a poster child for anything is nuts, because we don't have nearly enough information. It's funny that everyone keeps bringing Facebook into this at all, given how far it is from the typical deployment.
That out of the way, there's really no good reason to use MySQL or its derivatives any more. Ever. Postgres is superior in pretty much every way. Sure, MySQL is good enough for some things, but that's like choosing a CRT monitor for your computer when you have a perfectly good LCD right there ready to go. Sure, CRTs still work just fine, but no one in their right mind would pick one over an LCD unless they have some really exotic requirements.
Internet's Energy Needs Growing Faster Than Efficiency Gains
This is definitely happening. It is a factor, and an important one. But let's not forget our economics: the world's appetite for energy has some level of equilibrium to it, such that as energy is saved in one area (such as transporting rental DVDs or bill envelopes) it's likely picked up by another (economists call this the rebound effect)... and it's almost impossible to spot the corresponding increase.
The same effect applies to nearly every effort so far at reducing carbon emissions. There are lots of things aimed at specific places like cars or power plants, but not enough yet to actually change appetite, and truly alter the equilibrium state.
What Will The Expanding World of ChromeOS Mean For Windows?
This is what Surface RT was _supposed_ to be. Too soon to say whether it's working... it's selling slowly, but it is selling.
Microsoft Fails Antivirus Certification Test (Again), Challenges the Results
Bad engine with current definitions beats a good engine with out of date definitions.
The thing about MSSE is that it stays current on its own. I come across machines running the other products all the time that are months out of date, because someone bought the product once, or just stuck with the trial that shipped with their computer, and couldn't be bothered to re-subscribe later on. With MSSE there is no risk of that, and for this reason alone I'd rank it above most of the other products.
That said, I give good marks to AVG for the same reason, and to a lesser extent to Avast (it still requires re-registration every 14 months, but at least it's free, which removes one barrier to keeping it current).
W3C Finalizes the Definition of HTML5
Odd. Some of us have been "testing" it in production for quite a while now.
Ask Slashdot: Technical Advice For a (Fictional) Space Mission?
A few thoughts:
If you're thinking colonization, plan to use and coordinate between more than one ship. You're moving at incredible velocities, so the ships need to avoid colliding with each other, yet you also don't want long waits between arrival times, even though they'll all be following similar paths. An accident with a ship early in the launch could leave debris that is problematic for those that follow.
Even with a good-sized fleet, the ships would have to be immense... too big to easily launch from the Earth's surface in one go. Use the moon as a staging and assembly area where modules are pieced together into a larger whole.
Talk about advances in using nuclear energy for propulsion. I don't mean (just) nuclear electricity generation, and I don't mean Orion [http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion)]. I mean real research into more efficiently using the massive energy created from nuclear reactions to propel the craft in space. The steam=>turbine process we use today is just not very efficient at converting that energy into something usable, but in space there might be more options. Because of the risk of contamination, use conventional rockets to first reach the moon and for the initial escape when leaving the moon.
Just preparing the launch from the moon is a process that will take nearly a decade, so go epic and start somewhere in the middle.
Know how to slow down. You'll want to get the ship going as fast as you can. One benefit of somehow harnessing a nuclear reaction is the ability to constantly apply thrust. But you need to plan for a way to slow down as well. Use your destination's gravity to help. Since you're bringing a fleet, you might have a couple ships pass just too far from the planet, miss their chance, and shoot off into space with no hope of recovery before life support runs out.
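The constant-thrust idea above has a tidy worked example: accelerate to the midpoint, flip the ship, and decelerate the rest of the way, giving a travel time of t = 2·sqrt(d/a). The acceleration (a leisurely 0.01 g) and the Earth-Saturn distance below are illustrative assumptions, not figures from the story.

```python
import math

# Flip-and-burn under constant thrust: accelerate to the midpoint, flip,
# then decelerate. Travel time is t = 2 * sqrt(d / a).
# Both numbers below are illustrative assumptions.
AU = 1.496e11               # meters per astronomical unit
d = 8.6 * AU                # rough average Earth-Saturn distance, meters
a = 0.01 * 9.81             # a modest 0.01 g of continuous thrust, m/s^2

t_seconds = 2 * math.sqrt(d / a)
t_days = t_seconds / 86400
v_peak = a * math.sqrt(d / a)   # speed at the flip point, m/s
print(f"Trip time: {t_days:.0f} days, peak speed: {v_peak / 1000:.0f} km/s")
```

Even a hundredth of a g, held continuously, beats a multi-year coast by a wide margin. It also makes the "know how to slow down" point vivid: half the trip *is* the slowing down, and a ship that misses its deceleration window is doing hundreds of km/s with nowhere to stop.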
Destination: you need a place where you can sustain, indefinitely, near-Earth temperature and gravity, water, breathable atmosphere, and the radiation protection provided by the outer layers of the atmosphere. Such a place does not exist outside of Earth in our Solar system, so you'll have to make one up.
First up is heat. The only thing in the solar system that puts out enough is the sun. Unfortunately, it provides far too much heat for any planetary orbit closer in, and far too little for any planetary orbit further out. We're lucky Earth sits at the distance from the sun that it does. We'll have to make do with a gas giant, since they also radiate heat. They don't radiate enough, but it's the best we can do, few people will know the difference, and it is fiction, after all. I propose Saturn. Prior missions to Saturn have taken a little over three years; with the upgraded propulsion, look to do it in about a quarter of that.
Next up is gravity. Humans can do very well in low-G, with astronauts spending over a year in space without too many ill effects, so anything 1/4 the size of Earth on up is fine. You ought to be able to pick a nice rock from among Saturn's many small moons. It's good to be a little on the small side, because that will make it easier for your colonists to leave their new home and mine Saturn's rings for resources. For water, let's suppose this particular rock is chosen because it just happens to have a nice supply underground. Your colonists will actually mine for water.
The rock will also have a cave system. Your near-future colonists will have no hope of terraforming a breathable atmosphere, so their plan will be to seal existing caves and create an oxygen atmosphere within them by electrolyzing the water they mine. Good thing they brought a lot of uranium. The cave system will also provide the needed radiation protection in place of an atmosphere.
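The oxygen-from-water scheme sizes out plausibly. Splitting water (2 H2O → 2 H2 + O2) releases all of its oxygen, which is 16/18 of the water's mass. The per-person oxygen consumption figure below is an assumption (roughly the commonly cited per-astronaut value), as is the colony size.

```python
# Rough sizing for oxygen-from-water. Electrolysis releases all of the
# water's oxygen: O2 is 16/18 of water's mass (O vs H2O molar mass).
# The 0.84 kg/day consumption figure and colony size are assumptions.
o2_per_person_kg_day = 0.84               # ~ per-astronaut O2 consumption
o2_mass_fraction_of_water = 16 / 18

water_kg_day = o2_per_person_kg_day / o2_mass_fraction_of_water
colony = 1000
print(f"Water electrolyzed: {water_kg_day:.2f} kg/person/day, "
      f"{colony * water_kg_day / 1000:.2f} tonnes/day for {colony} colonists")
```

Under a tonne of water per day for a thousand colonists is well within reach of a serious mining operation, and the hydrogen byproduct is useful too. The energy bill is where the uranium comes in.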
I think that about covers it for now.
Bug Forces Android Devices Off Princeton Campus Network
From the sounds of it, Princeton doesn't even do NAT. They have a large block (probably a class A) of real, routable internet IPs to hand out.
As for the proxy, a lot of places I see are actually moving away from a proxy that requires you to update settings on the client, and are instead using a so-called transparent proxy (really just a router or bridge) as the default gateway handed out by DHCP. It works because the only way to reach the web is for traffic to go in one interface and out the other.
Bug Forces Android Devices Off Princeton Campus Network
Reading the article, they're using 1-hour and 3-hour DHCP lease times! The easy fix here is simply to bump the lease times back up to a few days, which used to be the default everywhere. Then IP addresses remain relatively static, and the broken behavior won't cause nearly as much of a problem.
I understand there's a reason the lease is made that short, but I don't agree with it. Because wifi networks are effectively unswitched, you need to limit the scope/size of each individual network to keep broadcast traffic (like DHCP requests) from consuming a significant portion of the available throughput. Thus, wifi SSIDs on campuses often tend toward a mere /24 or /23 address pool per SSID and area. But you'll likely have a lot more than 512 devices move through that area over time. To compensate, a lot of places have lowered their lease times; I've seen leases as short as 5 minutes. The justification is that most of these devices are mobile devices that only tend to be online for a few minutes at a time anyway.
In my opinion, the result is that you have now created a ton of the very kind of traffic you were trying to avoid in the first place, as all your fixed devices and even some of your mobile ones have to frequently renew their leases. I think a better approach is to allow a much larger address space per wireless zone. That way you can have longer lease times, spend less wireless bandwidth handling DHCP traffic, and keep a more static IP pool that won't suffer as much from the broken DHCP behavior here. The downside is that the zone size has to stay the same -- you still don't want too many devices in the same zone at the same time (remember: allocated address space != actual online devices) -- so you need to reserve a whole lot more addresses, maybe even switch to the 10.0.0.0 space for your wifi to get enough for a larger campus (though that space is _easily_ large enough).
Unfortunately, IIRC a couple of the major vendors have their controllers/access points set up to assume /24 zones, and that makes implementing this difficult.
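The lease-time tradeoff above is easy to put numbers on. DHCP clients normally attempt renewal at T1, half the lease time, so the steady-state renewal rate is roughly clients / (lease / 2). The 500-client figure is an illustrative assumption for one wireless zone.

```python
# Steady-state DHCP renewal load for different lease times.
# Clients typically renew at T1 = half the lease, so the rate is about
# clients / (lease / 2). The client count is an illustrative assumption.
clients = 500

def renewals_per_hour(lease_seconds: int) -> float:
    """Approximate steady-state DHCP renewal attempts per hour."""
    return clients / (lease_seconds / 2) * 3600

for label, lease in [("5 minutes", 300), ("1 hour", 3600), ("3 days", 259200)]:
    print(f"lease {label:>9}: ~{renewals_per_hour(lease):,.0f} renewals/hour")
```

A 5-minute lease generates roughly three orders of magnitude more renewal chatter than a 3-day lease for the same population, which is the whole argument in one number.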
Generic PCs For Corporate Use?
Have you considered off-lease machines? At the school where I'm admin we get Core 2 Duos with 2GB RAM and a 1 yr warranty for $280 each.