
Comments


Operating Systems Still Matter In a Containerized World

Junta Re:what are you smoking? (125 comments)

As for I/O, you can pass PCI devices through into the guest for pretty much native networking performance.

Of course, that comes with its own headaches and negates some of the benefits of a VM architecture. Paravirtualized networking is however pretty adequate for most workloads.

It's not like you have to do VM *or* baremetal across the board anyway. Use what makes sense for the circumstance.
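As a rough sketch of the two approaches under QEMU/KVM (the PCI address, disk image, and bridge name are hypothetical placeholders; the commands are assembled and printed rather than executed):

```shell
#!/bin/sh
# Sketch only: we build and print the invocations instead of running them.
# The NIC's PCI address and the image name are hypothetical placeholders.
NIC=0000:03:00.0
IMG=guest.qcow2

# Option 1: VFIO passthrough -- near-native NIC performance, but the
# guest is tied to this host's hardware (complicates live migration).
PASSTHROUGH="qemu-system-x86_64 -enable-kvm -m 4096 \
-drive file=$IMG,if=virtio -device vfio-pci,host=$NIC"

# Option 2: paravirtualized virtio-net -- adequate for most workloads
# and keeps the guest fully abstracted from the host.
PARAVIRT="qemu-system-x86_64 -enable-kvm -m 4096 \
-drive file=$IMG,if=virtio -netdev bridge,id=n0,br=br0 \
-device virtio-net-pci,netdev=n0"

echo "$PASSTHROUGH"
echo "$PARAVIRT"
```

The only difference is how the NIC is exposed; the disk stays paravirtualized in both cases, which is the usual compromise.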

yesterday

Operating Systems Still Matter In a Containerized World

Junta Re:Of Course They Do! (125 comments)

In my experience, KSM hasn't helped as much as it promised. It depends heavily upon the workloads. It also impacts memory performance. If things are such that KSM can be highly effective, then a container solution would probably be more prudent.
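For reference, KSM reports its own effectiveness through the standard sysfs interface on a Linux host. A minimal sketch (enabling KSM itself requires root, and the 4 KiB page size is an assumption):

```shell
#!/bin/sh
# Sketch: read KSM's accounting from sysfs on a Linux host.
# (Enabling KSM -- 'echo 1 > /sys/kernel/mm/ksm/run' -- needs root.)
KSM=/sys/kernel/mm/ksm
if [ -r "$KSM/pages_sharing" ]; then
    # pages_sharing counts page references deduplicated away, i.e.
    # roughly how much memory KSM is saving (4 KiB pages assumed).
    sharing=$(cat "$KSM/pages_sharing")
    msg="KSM saving roughly $(( sharing * 4 )) KiB"
else
    msg="KSM not available on this system"
fi
echo "$msg"
```

Watching that number against `pages_shared` over time is a quick way to tell whether your particular workload mix actually benefits.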

yesterday

Operating Systems Still Matter In a Containerized World

Junta Re:Of Course They Do! (125 comments)

CPU throughput impact is nearly undetectable nowadays. Memory *capacity* can suffer (you have overhead of the hypervisor footprint), though memory *performance* can also be pretty much on par with bare metal memory.

On hard disks and networking, things get a bit more complicated. In the most naive setup, what you describe is true: a huge loss from emulating devices. However, paravirtualized network and disk are pretty common, which brings things into the same ballpark as not being in a VM. But that ballpark is relatively large; you still suffer significantly in the I/O department in x86 virtualization, despite a lot of work to make that less the case.

Of course, a VM doesn't always make sense. I have seen people build a hypervisor that ran a single VM requiring pretty much all the resources of the host, such that no other VM could run. It was architected such that live migration was impossible. This sort of stupidity makes no sense: pissing away efficiency for no gain.

yesterday

Operating Systems Still Matter In a Containerized World

Junta A horrible nightmare... (125 comments)

So to the extent this conversation does make sense (it is pretty nonsensical in a lot of areas), it refers to a phenomenon I find annoying as hell: application vendors bundling all their OS bits.

Before, if you wanted to run vendor X's software stack, you might have had to mate it with a supported OS, but at least vendor X was *only* responsible for the code they produced. Now, increasingly, vendor X *only* releases an 'appliance' and is in practice responsible for the full OS stack despite having no competency to be in that position. Let's look at the anatomy of a recent critical update: OpenSSL.

For the systems where the OS has applications installed on top, patches were ready to deploy pretty much immediately, within days of the problem. It was a relatively no-muss affair. Certificate regeneration was an unfortunate hoop to go through, but it's about as painless as it could have been given the circumstances.
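As a sketch of that no-muss path on a conventional distro (the package-manager line is illustrative and distro-specific; the certificate subject and lifetime are placeholders):

```shell
#!/bin/sh
# Step 1 (illustrative, Debian/Ubuntu shown; requires root):
#   apt-get update && apt-get install --only-upgrade openssl libssl1.0.0
# Step 2: regenerate key material that may have leaked while the
# vulnerable library was in place. Subject and lifetime are placeholders.
openssl req -new -x509 -nodes -newkey rsa:2048 \
    -keyout server.key -out server.crt -days 365 \
    -subj "/CN=server.example"
ls -l server.key server.crt
```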

For the 'appliances', some *still* do not even have an update for *Heartbleed* (and many more didn't bother with the other OpenSSL updates). Some have updates, but only in versions that also carry undesired functional changes in the application, and the vendor refuses to backport the relatively simple library change. In many cases, applying an 'update' actually resembles a reinstall: downloading a full copy of the new image and doing some 'migration' work to maintain data continuity.

Vendors have traded generally low amounts of effort in initial deployment for unmaintainable messes with respect to updates.

yesterday

AMD Launches Radeon R7 Series Solid State Drives With OCZ

Junta IIRC... (64 comments)

nVidia actually did sell it pretty well though. It wasn't in any way a better experience, but the brand name did actually carry the product as I recall.

It was one of the reasons the relationship between Intel and nVidia went so far south: Intel made it impossible to have third-party chipsets, and nVidia lost a revenue opportunity. People rightly critical of the technical aspects were not the downfall of the product line; Intel locking down their platform was.

In short, this stuff *could* in theory fly. In practice, I don't think AMD has the brand strength. People still seem to look to nVidia as 'the go-to' brand more often than AMD in the PC component world.

2 days ago

Apple's App Store Needs a Radical Revamp; How Would You Go About It?

Junta Re:Slashdot proves it! (249 comments)

I don't make money for insightful comments

Looks like someone didn't get the memo...

Don't worry, I can fix it. Just send me your bank account number and your social security number for verification...

about a week ago

Cisco To Slash Up To 6,000 Jobs -- 8% of Its Workforce -- In "Reorganization"

Junta Re:Courage... (206 comments)

The funny thing in your story is that the word that narrows it down the most is the pronoun 'she'. I would guess you work for either HP or IBM. Massive stock buybacks and continual layoffs are the modus operandi of most of these companies, but female CEOs are a little more rare. Of course, they all do and say the exact same things, so they could probably all replace their CEOs with chatbots that just always say 'buy back some more stock and lay off more people' and no one would notice.

about a week ago

Cisco To Slash Up To 6,000 Jobs -- 8% of Its Workforce -- In "Reorganization"

Junta Courage... (206 comments)

If we don't have the courage to change

It can be debated whether this is a necessary thing or a prudent thing or whatever, but regardless of those debates, this is a pretty stupid thing to say. I don't think a CEO should ever characterize their decision to terminate other people's jobs as 'courageous'. There really isn't anything remotely courageous about the strategy he laid out. It's not even particularly bold or daring; it's basically the exact thing every executive of every tech company has been saying about their respective companies now.

Not having much of a horse in the race (not working for cisco or even a cisco client), I can't comment on whether it's the right choice or whatever, but it really rubbed me the wrong way to see him refer to layoffs as an act of courage.

about a week ago

Ask Slashdot: Corporate Open Source Policy?

Junta Re:Rust (57 comments)

One issue is that generally such projects are actually pretty niche and get developed with only that niche in mind. There simply isn't a pool of eager developers to tackle only your specific issue.

If you can think about modularity and develop some key components that are more generally useful as distinct projects, you may have better luck.

But overall, building a large development community around any given open source project is the exception rather than the rule, even if you do your best. Even very ubiquitous projects that play a role in nearly every Linux system often have maybe one or two developers who are really familiar with them.

about a week ago

Ask Slashdot: Corporate Open Source Policy?

Junta Re:Better ways to do it. (57 comments)

If you do use the GPL *and* have copyright assignment, there actually could be a case made for dual licensing: GPL for those who play the open source game, and a proprietary commercial license for others. This is the 'get free coders to do work for you' business model, which seems pretty disingenuous, but at least there is a logic to a corporate-sponsored project going GPL.

What surprises me is that in most scenarios where corporations pick the license, they pick a BSD-style license. I can understand them wanting that property in *other* people's code, but it's surprising they wouldn't want more assurance that their own work won't come back to compete with them commercially when they have the choice.

about a week ago

Ask Slashdot: Corporate Open Source Policy?

Junta Re:CLA (57 comments)

CLA with copyright assignment opens the door to have your contributions abused by the copyright holders.

CLA without copyright assignment is usually just the 'project' covering their ass in case of problematic contributions that infringe copyright or patents.

about a week ago

Ask Slashdot: Corporate Open Source Policy?

Junta Re:CLA (57 comments)

The general intent of many CLAs is to make the contributor attest that he isn't doing something like injecting patented capability or violating someone's copyright. The key distinction between an open source product being redistributed by someone who adds problematic capability, versus having that capability injected directly, is that the curator of the project is the one who gets sued in the latter case. So if stuff is bolted on but not coming back, the weaker assurance of a GPL or BSD style license is acceptable, because the risk is not the project owner's. The statement is certainly not sufficient to be confident that something isn't wrong, but it's a stronger basis for passing some culpability on to the contributor in the event of issues.

The sort of CLA you are talking about is the kind with copyright assignment. The most prominent example of this is actually the FSF requiring copyright assignment for any accepted contribution. These can be employed when a company or organization wants to reserve the right to modify licensing. In the FSF's case, this is why they can change license terms from GPLv2 to GPLv3, whereas in a project like the kernel the license cannot change because there are too many copyright holders. I actually don't know of a corporate-sponsored CLA involving copyright assignment.

I had previously assumed CLA implied copyright assignment until I was forced to actually cope with a couple of CLAs and looked more carefully.

about a week ago

The Quiet Before the Next IT Revolution

Junta Re:A rather simplistic hardware-centric view (145 comments)

Software reliability over the past few decades has shot right up.

I think this is a questionable premise.

1) Accurate, though it has been accurate for over a decade now.

2) Things have improved security-wise, but reliability could be another matter. When things go off the rails, it's now less likely that an adversary can take advantage of the circumstance.

3) Try/catch is a potent tool (depending on the implementation it can come at a cost), but the same things that caused 'segmentation faults' with a serviceable stack trace in a core file cause uncaught exceptions with a serviceable stack trace now. It does make it easier to write code that tolerates some unexpected circumstances, but ultimately you still have to plan application state carefully or be unable to meaningfully continue after the code has bombed. This is something that continues to elude a lot of development.

4) Actually, the pendulum has swung back to 'apps' in the handheld space. In the browser world, you've traded 'DLL hell' for browser hell. DLL hell is a sin of Microsoft's for not having a reasonable packaging infrastructure to help manage that circumstance better. In any event, now a server application crash, a client crash, *or* a communication interruption can screw up the application experience instead of just one.

5) Virtualized systems I don't think have improved software reliability much. They have in some ways made certain administration tasks easier and enabled better hardware consolidation, but it comes at a cost. I've seen more and more application vendors get lazy and just furnish a 'virtual appliance' rather than an application. When the bundled OS requires updates for security, the update process is frequently hellish or outright forbidden. You need to update OpenSSL in their Linux image, but other than that things are good? Tough: you need to go to version N+1 of their application and deal with API breakage and stuff just because you dared want a security update for a relatively tiny portion of their platform.

6) I think there's some truth in it, but 32- vs. 64-bit does still rear its head in these languages, particularly since a lot of performance-related libraries for those runtimes are written in C.

7) This seems to contradict the point above; Python fits that description pretty well.

8) This has also had a downside: people jump to SQL when it doesn't make much sense. Things with extraordinarily simple data to manage get the 'put it in SQL' treatment pretty quickly. Some of the 'NoSQL' sensibilities have brought sanity in some cases, but in others have replaced one overused tool with another equally high-maintenance beast.

9) True enough. There is some signal/noise issue, but it's better than nothing at all.

I think a big issue is that at the application layer there has been more and more pressure for rapid delivery and iteration, with a false sense of security coming from unit tests (which are good, but not *as* good as some people feel). Stable branches that get bugfixes only are more rare now, and more and more users are expected to ride the wave of interface and functional changes if they want bugs fixed at all. 'Good enough' is the mantra of a lot of application development: if a user has to restart or delete all configuration before restarting, oh well, they can cope.

about a week ago

The Quiet Before the Next IT Revolution

Junta Re:A rather simplistic hardware-centric view (145 comments)

www.scalemp.com does what you request.

It's not exactly all warm and fuzzy. Things are much improved from the Mosix days in terms of having the right data available and the right kernel scheduling behaviors (largely thanks to the rise of NUMA as the usual system design). However, there is a simple reality: the server-to-server interconnect is still massively higher latency and lower bandwidth than QPI or HyperTransport. So if a 'single system' application designed around assumptions of no worse than QPI inter-process connectivity is executed this way, it still won't be that nice, and an application managing the messaging more explicitly will fare better.

But if you have to use an application that can do multi core but not multi node and force it to scale *somewhat*, ScaleMP can help things out significantly.

about a week ago

About Half of Kids' Learning Ability Is In Their DNA

Junta Re:Correlation not Causation (227 comments)

In this specific case we can split hairs, but in the end they are singling out genetics, within a relatively large set of uncontrolled variables, as the facet to focus on. Yes, like any good scientist they make the distinction, but pretending that aside from genetics a pair of fraternal twins and a pair of identical twins have *no other* fundamentally different life experiences is a long shot, one that strongly suggests belief in a causative hypothesis and that they conducted this research with that assumption in mind. Identical twins raised together, I suspect, generally differ from fraternal twins in more interesting ways than merely their identical genes.

I personally suspect the hypothesis is true, that genetics plays a major role. However, *this* particular study is almost certainly full of non-genetic correlations that line up with the genetic correlations, making it difficult to say anything for sure on the genetic front versus another variation on the environmental front.

about two weeks ago

About Half of Kids' Learning Ability Is In Their DNA

Junta Re:Correlation not Causation (227 comments)

"The correlation between reading and mathematics ability at age twelve has a substantial genetic component"

The problem is the premise that "all siblings presumably experience similar degrees of parental attentiveness, economic opportunity and so on", which is of course very unlikely to be a safe assumption.

I think the issue at hand is it isn't quite controlled well enough to trumpet the genetic component as *the* correlation of interest. Other factors are handwaved away by saying "all siblings presumably experience similar degrees of parental attentiveness, economic opportunity and so on". Anyone who has grown up alongside twins (there actually were a few sets of twins in my town growing up, two sets of them identical, one set mixed gender) knows this is too much to presume. When people look identical, there is a much stronger expectation that they *are* fundamentally identical. The identical twin sets both had rhymed names, but the other twins did not. Parents and teachers and fellow kids more naturally treat fraternal twins like any other set of siblings, but identical twins do not receive the same experience. People assume they like the same things, they should hang out together, they *should* be good at the same things. Many believe there is some mystical/telepathic link between identical twins. Fraternal twins are 'just siblings', to the extent that until explicitly mentioned no one may even realize they are *twins*. Identical twins are blatantly obvious from the moment you see them and trigger a large amount of preconception before anyone so much as utters a word. All these societal expectations undoubtedly have *some* impact on their development that shouldn't be so casually dismissed.

Basically, there is no reason to believe identical and fraternal twins receive a comparable life experience in aggregate when raised together. With that in mind, the study should be saying there is a correlation for identical versus fraternal twins rather than 'there is a correlation with genetics'.

about two weeks ago

Leaked Docs Offer Win 8 Tip: FinFisher Spyware Can't Tap Skype's Metro App

Junta Re:Switch away from Skype and Windows (74 comments)

But at the whole, UEFI Secure Boot along with Windows 8 signed boot-loader and OS is *very* hard to circumvent.

Only if you are paying attention during boot, and only if the attack comes from within the OS. Of course, MS could have provided the within-the-OS protection themselves by being very careful in how they treated the system partition, without requiring firmware to verify it. If you have full control of the console and/or device, you can do exactly what you describe: boot a valid OS with a malicious configuration designed to rootkit the OS that's there, or impersonate the OS that was supposed to be there to gain information for accessing the presumably cloned disk.

Because it is actually pretty ineffectual against an adversary that physically controls your entire system or your disk contents, I think a different design would have been better. Secure Boot is too open ended to afford sufficient protection, and yet too much of a pain by being not quite open ended enough to allow OS vendors without Microsoft's blessing. I think Secure Boot should have worked by installing the key to firmware at initial OS install time: the first OS install gets to 'take ownership' of the platform, and that key becomes *the* key to trust. This would have allowed Microsoft to put in a Microsoft key and say 'screw trying to certify things like grub'. Installing a different OS afterwards would have required going into the firmware to unclaim the platform so the new bootloader could claim it on install.

I'm actually ok with TPM and how things like Bitlocker leverage the TPM. The Secure Boot scheme reeks of too much inconvenience for inadequate security compared to what *could* have been done.

about two weeks ago

Leaked Docs Offer Win 8 Tip: FinFisher Spyware Can't Tap Skype's Metro App

Junta Re:Switch away from Skype and Windows (74 comments)

There's a few things that seem off in that statement...

IIRC, Secure Boot didn't actually hook into the TPM.

Another thing: I'm not sure what you mean by 'modify the TPM'. You could perhaps have the TPM bind some stuff the legitimate user wouldn't want, but you couldn't defeat sealing to a sufficient set of PCRs just by having OS-level control of the TPM facilities, AFAIK.

about two weeks ago

Leaked Docs Offer Win 8 Tip: FinFisher Spyware Can't Tap Skype's Metro App

Junta Re:Switch away from Skype and Windows (74 comments)

Windows 8 Secure Boot is a pretty flimsy facility that says 'yep, this code was blessed by Microsoft'. It does nothing to vouch for whether the configuration leading up to the payload, or the configuration of the payload itself, is what you actually want (e.g. a user expects they have put in Windows 8, but instead Red Hat loads with a malicious configuration -- a sort of misbehavior Secure Boot does nothing about).

Of course, the proposed scheme isn't exactly nice either, notably the handwaving about 'file is known safe'. In an open, diverse ecosystem this is highly impractical. SELinux errs on the side of letting some stuff slide and still generates enough false positives to frustrate a user trying to run legitimate applications. These schemes start from a premise of 'if you know everything the system is ever supposed to do, then...', which is unlikely. Doing this from firmware to kernel may be feasible, and a way to declare a 'known good state' from which to start some instrumentation in the common case, but push further into the wide-open user space with overly specific restrictions and there will be difficulties. Maybe in some very specific special-purpose applications, but in a general-purpose system the universe of legitimate things to do is just not well defined enough.

about two weeks ago

Satya Nadella At Six Months: Grading Microsoft's New CEO

Junta Straightforward guessing where he wants to go.. (151 comments)

Too early to try to measure 'success'.

shows strong strategic leadership, particularly around the cloud

So far there isn't anything particularly different about his tenure as far as degree of success in the 'cloud' market. In terms of Azure, it's a tricky proposition for a company that is ostensibly a high-margin company: it means going toe to toe with Amazon, a company that has repeatedly shown it is not shy about operating on margins so thin it is at high risk of actually operating at a loss in a given quarter (I would say the same about IBM's foray into the space).

I suspect Windows is here to stay for the foreseeable future (it is about the only product they have with proven market acceptance that is also consistently profitable). Devices I think will go away, as they should. They let Google and Apple get ahead in the broad-ecosystem strategy and the vertically integrated strategy respectively, leaving no real room for MS. MS has to figure out how to somehow undercut Android's cost for partners or give up on owning the underlying platform. Either way, making devices in house will not win them any favors; Apple has shown the most success and the most loyalty, and yet even their share is going down in the face of the huge ecosystem of Android vendors.

Xbox would make more money sold to a third party, who would probably do better with it than Microsoft has.

about two weeks ago

Submissions

Junta hasn't submitted any stories.

Journals

Junta has no journal entries.
