



A War Over Solar Power Is Raging Within the GOP

dlapine Re:Fucking rednecks (1030 comments)

The thing is, I can put solar on my house, and I will be able to generate enough power, on occasion, to have some extra to put back on the grid. With the right configuration and local storage, I can even go off the grid. As a consumer, the other options you mention are things I can't do. Sure, solar is more expensive per kWh, but at least it's doable for lots of homeowners.

Separately, you may not have noticed that the Republicans held effective veto power over new legislation in the Senate until just yesterday. Thus, the claim that the Republicans (even with a minority in the Senate) can be held somewhat responsible for the lack of progress in this area seems reasonable.

about a year ago

1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2

dlapine Re:High Throughput Computing not HPC (54 comments)

Sure, that's why I said that this is an advance. If you don't need HPC resources, this can work really well. But you have to educate scientists and researchers on the difference, and this article doesn't do that well enough.

about a year ago

1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2

dlapine High Throughput Computing not HPC (54 comments)

While this is a nice use of Amazon's EC2 to build a high-throughput system, it doesn't translate as nicely to what most High Performance Computing users need: high network bandwidth, low latency between nodes, and large, fast shared filesystems on which to store and retrieve the massive amounts of data being used or generated. The cloud created here is only useful to the subset of researchers who don't need those things. I'd have a hard time calling this High Performance Computing.

Look at XSEDE's HPC resources page. While each of those supercomputers has something special about the services it offers (GPUs, SSDs, fast access, etc.), they all spent a significant portion of their build budget on a high-performance network to link the nodes for parallel codes. They also spent money on high-performance parallel filesystems instead of more cores. Their users can't get their research done effectively on systems or clouds without those important elements.

I think it's great that public cloud computing has advanced to the point where useful, large-scale science can be accomplished on it. Please note that it takes a separate company (CycleCloud) to make it possible for your average scientist to use Amazon EC2 in this way (lowest cost and webapp access), but it's still an advance.

Disclaimer: I work for XSEDE, so do your own search on HPC to verify what I'm saying.

about a year ago

Ask Slashdot: Do You Move Legal Data With Torrents?

dlapine Linux ISOs mostly (302 comments)

At work I need to install several different types/versions of Linux OSes for testing. I always torrent the ISO as a way of "paying" for the image that I'm using.

A few years back, we did some experimenting with torrents over the TeraGrid 10GbE backbone, to see how well that worked over the long haul between IL and CA. With just 2 endpoints, even on GbE, it wasn't better than a simple rsync. We did some small-scale tests with fewer than 10 cluster nodes on one side, but it still wasn't as useful as a wide-area filesystem we were testing against. BitTorrent protocols just aren't optimized for a few nodes with a fat pipe between them.

I am interested in looking at the new BitTorrent Sync client to see how it works for our setup. We have many users with tens of TBs of data to push around on a weekly basis.

about 2 years ago

Richard Stallman: Limit the Effect of Software Patents

dlapine Re:Just how would this work? (257 comments)

If the purpose of patents is "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries" then no, I don't see how restricting patents to physical implementations (not software on a general purpose computing device) utterly defeats that purpose. Nothing restricts the author from enforcing his patent on physical reproductions; he just can't claim that a non-physical implementation is a violation.

Can you give any examples where this change would stop or slow scientific progress?

more than 2 years ago

Ex-NASA Employees Accuse Agency of 'Extreme Position' On Climate Change

dlapine Ex-NASA employees (616 comments)

I take some relief in noting that these are "ex-NASA" employees.

Per the article, it seems that these guys mostly worked at the Texas-based Johnson Space Center:

"Keith Cowing, editor of the website NASA Watch, noted that the undersigners, most of whom have engineering backgrounds, worked almost exclusively at the Houston-based Johnson Space Centre, a facility almost entirely removed from NASA's climate change arm."


Why is it that there are so many amateur climatologists in Texas who know so much, but publish so little? I wonder if these gentlemen even bothered to visit the site of the "Plants Need CO2" sponsor, Leighton Steward, to see who also agreed with their opinions. I'm not linking to that site, and I'd surely want to avoid association with anyone with ideas like that.

Maybe Steward just punked them. Yep, that's got to be it.

more than 2 years ago

Ask Slashdot: Where Are the Open Source Jobs?

dlapine Government and Higher education (506 comments)

Don't overlook positions in government or higher education. Besides being OS-agnostic in many cases, there are universities all over the country, not just in the SF area.

Want to travel a lot, have a nice career path, and gain instant usefulness due to Linux knowledge? Try DISA. I'm not sure if they are still hiring for their intern program (the Army uses "intern" in a different way than business IT), but it was a great opportunity for some people I know. DOE is another area that looks for reliable, Linux-knowledgeable sysadmins.

Look at the Top500 list and see how many big clusters are run by universities and their affiliates. Then check out how many of those systems use Windows, and then laugh. Higher education also runs a lot of smaller systems on Linux, and lots of positions are starting to open up there. If you have cluster admin knowledge, you're a shoo-in. If not, take a lower position where they do run clusters and let them know that you'd be interested in moving up.

Disclaimer: yes, I work at NCSA at the University of Illinois, Urbana, and yes, we have some Linux positions open. Do the legwork yourself, however; it'll make you look smarter.

more than 2 years ago

Best Use For A New SuperComputer (HPC)

dlapine HPC Planning (3 comments)

You're about to receive a large amount of hardware from the vendor, and you haven't decided which GPUs to use, which interconnect for communications, what OS would be appropriate, or the types of workloads your users will be running (beyond your base set)? Really? If that's the case, no amount of information from Slashdot will solve your problems.

If you have no interconnect chosen, how will you rack the systems in case cable lengths are an issue, as they are for IB? Do you even have nodes that natively support both 10GbE and IB? I highly doubt it. What about your core network switches? 1200 ports (plus switch fabric) of IB or 10GbE might cost more than those 1200 nodes. You're also talking about adding GPUs and a high-speed network adapter to each of 1200 nodes; what kind of manpower do you have for the task of installing 2 PCIe cards per node for 1200 nodes? I'm assuming that you'd want to be in operation sometime before Christmas. I won't even ask about what kind of large-scale storage you have planned. I shudder to think of what power and cooling requirements you've already overlooked or made impossible.

Who's your vendor? If they really let you purchase 1200 nodes without any sort of planning, they should be dragged behind horses and shot. What a waste of money.

I'm sorry to be so negative, but you guys really screwed the pooch on this one. When you are designing a supercomputer, the very first thing to decide is what the use cases are, especially if you're trying to generate revenue from the system. You have a limited amount of money to buy computing power, interconnect, storage, and facilities, so you have to optimize your purchase in those areas around the expected use of the system. Not to mention operating costs.

Sheesh. I hope you're just pranking us.

more than 3 years ago

Michael Hart, Inventor of the E-book, Dead At 64

dlapine I knew him (70 comments)

My boss suggested that I attend a weekly "geek lunch" that a group of the older computer-savvy fellows held at the U of I's Beckman Institute, and I met him there. I was aware of Project Gutenberg before that but hadn't used it much. Michael was a good advocate for ebooks before anyone got around to coining that particular terminology. The last few times we met, I remember him being very excited as he had samples of various new ebook readers to try out. He was testing them to see how well they integrated with Project Gutenberg and was glad that more people would have easy access to it.

Over last fall, the group met weekly and I helped him with the process of making digital copies of the Gutenberg archive on different filesystems on individual drives. The entire Gutenberg archive is about 300GB with everything extracted, and we could dual-format a 750GB drive to fit one copy on NTFS and another on ext3. That was a fun experience; most people don't get to play with a real-life 300GB data set.

I hadn't been to a meeting in a while, darn it. I'll miss him.

more than 3 years ago

China Space Official Confounded By SpaceX Price

dlapine Re:Comparitive Advantage (276 comments)

I've heard that the Merlin 1c engines are about $1M apiece, and that SSMEs run $50M each at the current production rates.

It's hard to verify pricing for components, especially for SpaceX, as they do so much in-house. Who outside the company knows what the actual production costs of each part are? Hmmm, perhaps we can estimate the maximum possible cost of each engine based on launch prices and the assumption that SpaceX is not taking a loss on each launch.

A Falcon 9 launch costs $54M and has 10 Merlin 1c engines. I'm going to ignore the cost differences between the upper-stage (vacuum) and lower-stage engines. If everything else (fuel, lower & upper stages, facility lease, profit) were $0, each engine would cost at most $5.4M. In fact, looking at the announced pricing for Falcon Heavy, $110M max, with 27+1 engines, you're looking at less than $4M an engine, max.

Given the costs of the rest of the launch, and the number of engines (production scaling efficiencies) involved, I don't think that a $1M-per-engine estimate is too far off. That puts engines at 25% of the launch cost, and I'm OK with that estimate. I know that the Shuttle SRBs are a higher percentage of the cost of an SLS, but those are an outlier. You can buy 4 Atlas CCBs (with 8 engines) for the price of 1 SRB. Given that pricing, I'm not sure that any $10M engine out there has 10x the thrust of a Merlin 1c.
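The back-of-the-envelope bound above can be sketched in a few lines. This is only a loose upper limit: it uses the launch prices quoted in the comment and pretends engines are the only expense, which they obviously aren't.

```python
# Loose upper bound on Merlin 1c engine cost, assuming the entire
# launch price went to engines. Prices ($M) are those quoted above.

def max_engine_cost(launch_price_musd: float, engine_count: int) -> float:
    """Maximum possible per-engine cost ($M) if engines were the only expense."""
    return launch_price_musd / engine_count

falcon9_bound = max_engine_cost(54, 10)        # $54M launch, 10 Merlin 1c engines
falcon_heavy_bound = max_engine_cost(110, 28)  # $110M launch, 27+1 engines

print(f"Falcon 9 bound:     ${falcon9_bound:.2f}M per engine")      # $5.40M
print(f"Falcon Heavy bound: ${falcon_heavy_bound:.2f}M per engine")  # < $4M
```

The Falcon Heavy figure is the tighter bound, which is why the comment settles near it before discounting the rest of the vehicle's cost.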

So SpaceX is probably good with the whole multiple engine thing, at least on price.

more than 3 years ago

Univ. of Illinois Goes War-of-the-Worlds On Students

dlapine Notification System (168 comments)

I'm an alumnus of the U of I, and I work here as well. I get these notifications. I thought I'd bring up 2 points:

  1. Fortunately, given the spring break, the actual number of people on campus able to read this was quite low.
  2. Unfortunately, we had a fire on Green Street just 2 days ago, and we got an alert from the same system informing us about it. So this warning was probably taken very seriously for those 12 minutes.

Overall, I'm satisfied with the system and I was impressed by the very explicit letter from the chief both explaining the error and accepting the blame for the mistake. She also detailed the upcoming efforts to address the error. I'd like to see the same level of accountability from my ISP or phone company.

more than 3 years ago

Google vs. Bing — a Quasi-Empirical Study

dlapine Re:The difference between Google and Bing is (356 comments)

Hmmm, I can't say that my first attempt to use Bing gave me any Lindsay L. results, but NoScript did put up a cross-site scripting warning after I attempted to "disable" a helpful toolbar with my Facebook info proudly displayed.

I'm positive I don't need any search provider tapping into my Facebook info, and I certainly don't want to be reminded of it on the front page! That's, like, TSA scary.

Ignoring the blatant invasion of my privacy for a moment, I'm happy to say my (small-sample-size, insert disclaimer here) test of Google vs. Bing revealed that searching for "best all mountain skis" works differently in Google than in Bing. Google gave a list of places to buy "the best all mountain skis" as the top listings, whereas Bing gave a set of review sites telling me which ones were the best.

Not sure how to rate one result as better than the other; they're just different. Perhaps Google feels that its users know what they want, so it just points them at it. Perhaps Bing believes that its users want to learn what the best choice is for them. Hard to put a metric on that. I'd hazard an informed guess that both search providers weigh their results according to the desires of their users, as measured by click-through rates. Bing users might want more hand-holding, whereas Google users might want fewer distractions before they learn the location of something.

All that being said, I'm still not using a search engine that displays my Facebook account info. Yuck. I don't care if this is Facebook's fault; I don't want to see it on a random search page as part of the interface.

about 4 years ago

Global Warming 'Undeniable,' Report Says

dlapine Re:Terraforming (1657 comments)

Terraforming is great, if you have someplace else to practice. Trying to terraform the earth with our current level of knowledge about the process and possible side effects is like doing experimental brain surgery on yourself. If we screw it up, we have no place else to go. Paraphrasing the Tick, I like the Earth, I keep all my stuff there. Let's practice terraforming on Mars, first, to get the bugs out. Until then, let's not make things worse here by accident.

My biggest gripe about this whole debate is the countless number of people who fail to think at all and believe that we can ignore the mounting evidence that there even is an issue. Until they recognize the warning signs the scientists keep pointing out, we really can't have a debate about the issue and what to do about it. Human-caused or not, the planet seems to be getting hotter. Perhaps all those scientists are reading things incorrectly, or drawing the wrong conclusions, but even the chance that they are on to something ought to make all of us very concerned. And not just about the gas mileage of SUVs.

more than 4 years ago

Senators Want Big Rocket Instead of New Tech, Commercial Transportation

dlapine Re:This means Direct (342 comments)

Um, I was referring to Direct, the "STS without the space shuttle" design, not the Ares I "Stick". I was looking at the actual design for Direct's J-130 model right here. It's a 1.5-stage design with all engines ground-lit and the boosters jettisoned during flight, just like the STS.
I do agree with your statement about the Ares I:

I worked on Ares and know what the design is. That thing was a gigantic piece of crap just waiting to fail. Badly. From the barely stable structural dynamics of a 400ft long pencil flying at mach 6, to the ugliest, most disaster prone separation sequence; that design was doomed to fail.

But that's not what I was talking about. :)

Also, the very first class you take in Aerospace Engineering teaches you exactly why SSTO (single stage to orbit) is not as cost-effective as multiple stages. So your argument that this design is better because it doesn't need a second stage is not a good one. The design might be simpler and easier to build, but it requires so much more fuel per launch that it isn't worth it.

As for my argument about "single stage", I was referring to the fact that the design already gets 77mT to orbit with just a single stage (OK, 1.5 stages counting the SRBs) and that there was room for more growth, like a second stage, if you needed more lift and were willing to pay extra for it. Did I mention the option to use 5-segment SRBs? I could go on... It's just that the J-130 is the cheapest option for a new HLV, and it leverages all the work and research that went into the STS program, rather than throwing it away.

That's a good thing, in my opinion.

more than 4 years ago

Senators Want Big Rocket Instead of New Tech, Commercial Transportation

dlapine Re:This means Direct (342 comments)

Per the official design from the Direct team (sorry for the PDF, that's what they have), it's 77,835kg to a 30nm x 100nm orbit under the regular NASA GR&As. It's only down to 70mT if you arbitrarily factor in an additional 10% margin, which doesn't account for their own internal, undocumented 15% margin. I like engineers who give themselves leeway.

Short answer: yes, the 1.5-stage J-130 does 77mT to orbit per NASA rules.
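For anyone checking the margin arithmetic, it works out in one line (figures are those quoted from the Direct document above):

```python
# Direct J-130 payload figures, per the numbers quoted above.
nominal_kg = 77_835                    # 30nm x 100nm orbit, standard NASA GR&As
with_extra_margin = nominal_kg * 0.90  # after the arbitrary additional 10% margin

print(f"{with_extra_margin / 1000:.1f} mT")  # ~70 mT, matching the lower quoted figure
```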

more than 4 years ago

Senators Want Big Rocket Instead of New Tech, Commercial Transportation

dlapine This means Direct (342 comments)

This potential bill means congressional support for a Direct version of a shuttle replacement, or something close enough not to matter. Direct is a design to replace the space shuttle with a rocket that puts the cargo and capsule on top of the tank and moves the shuttle engines to the bottom of the tank. Without having to lift the load of the space shuttle itself, the rocket gets 77mT of cargo to orbit.

Re-using all the major shuttle components provides the cheapest possible option for a Heavy Lift Vehicle, not to mention the quickest, as a Direct design could be flying by 2013. The current plan from the administration doesn't even decide on an HLV design until 2015, let alone start the process of building and testing it. This is not a barrel of pork. Yes, somebody will make some money, but this is the cheapest option at the moment to keep a US heavy-lift capability in the near future, and it will be built here in the US.

Current US lift capability tops out at only 25mT in the Shuttle cargo bay to Low Earth Orbit. By funding a Direct-style vehicle, we get a minimum of 75mT to orbit without a second stage. This is a very good thing. With further development of a second stage, the payload capacity increases to 115mT+. Not only that, but by putting the payload on top of the vehicle, a Direct-style rocket can support a payload as wide as 12m across (the shuttle can only do 5m). So we get the ability to send more per launch and save over the life of a large project. For example, five flights of Direct would have been sufficient to build the ISS, versus the 40 shuttle launches it actually took.

By re-using the same engines and boosters as the space shuttle, we save billions (maybe $10 billion over time) in research and launch facility changes necessary for other designs (Ares would have required 2 new pad designs and new crawlers at $1 billion a pop). The cost per launch for Direct will be lower as well. For comparison, recovery of the shuttle SRBs, refurbishment of the shuttle, and launch costs have averaged out to about $1.3 billion per launch. A Direct launch will cost somewhere north of $200 million for the launch vehicle, plus operating costs, but won't include refurbishment or recovery operations. For the immediate future, NASA says it will launch the last shuttle in 2011, and after that we'll be paying the Russians $20-30 million per seat for rides in a Soyuz.

We save time in that we can have an unmanned cargo version of the vehicle doing test flights by 2013, whereas the engine testing alone for a liquid-fueled booster would take 5 years under the current plan. As all the parts are already man-rated (save for the modified ET), we could be launching Orion capsules on a Direct as soon as the Orions finish development in 2015 or so.

If this passes, I'll be one very happy space fan.

more than 4 years ago

SeaMicro Unveils 512 Atom-Based Server

dlapine Re:System Specs (183 comments)

Hmmm, I didn't see that. Given that it takes the place of two full racks, maybe you're supposed to put it on a pedestal in their place.

Something like this for easy access.

Or, maybe you could rotate it 90 degrees and mount it CPU-access-side up. At 10U that's only 17.5", so it should fit in a 19" rack. :-)

Seriously, if you don't plan to do hot swap on the CPU boards, you'd be OK in a normal rack. I'm not sure I'd trust hot swap for CPU boards anyways.

more than 4 years ago

SeaMicro Unveils 512 Atom-Based Server

dlapine System Specs (183 comments)

This is a good start- SM10000 System Overview

The interconnect is 1.28 Tbps, or 2.5 Gbps per core.

I/O includes a minimum of 8 GigE or 2 10-GigE links, which can be increased to 64 GigE or 16 10-GigE links per chassis.

This unit runs as 512 system images using stock 32-bit OSes. Each CPU may have 1 or 2 GB of RAM, and up to 64 local drives may be installed and divided among the CPUs with the included management software. The unit supports PXE boot, so the system images may run off local disk or from a RAM image.

Just to note, the Atom Z530 is a single-core, 32-bit-only CPU, if that matters.

I couldn't tell you whether the 16 10-GigE links would seriously limit this box or not. You'd have to show me a data center with more than 160 Gbps of internet connectivity first. :) And that's assuming you only purchased one of these suckers, because you'd need that much per chassis.
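A quick sanity check on those bandwidth figures (numbers from the overview above; the per-core share assumes the fabric is split evenly across all CPUs):

```python
# SeaMicro SM10000 bandwidth figures, per the overview quoted above.

fabric_tbps = 1.28   # internal interconnect
cores = 512          # one single-core Atom per system image

per_core_gbps = fabric_tbps * 1000 / cores   # assuming an even split
print(f"{per_core_gbps:.1f} Gbps per core")  # 2.5 Gbps

# Maximum external I/O: 16 x 10-GigE links per chassis.
external_gbps = 16 * 10
print(f"{external_gbps} Gbps external per chassis")  # 160 Gbps
```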

more than 4 years ago

Time To Dump XP?

dlapine Possible Hardware reasons to upgrade to Win 7 (1213 comments)

Here are some possible reasons to consider the upgrade to win 7 from a hardware perspective...

  • support for more than ~3.2GB of RAM with 64-bit Win 7
  • native USB 3 support planned
  • native support for 4k sectors on new drives, which is needed for drives larger than 2TB
  • better support for multiple CPUs, especially as the number of CPU cores goes past 4
  • native support for SSDs (TRIM, 4k offset, etc.)
  • Win 7 installs via USB, with no need for drivers on floppy (or slipstreaming)

Do these mean that Win 7 is a no-brainer for businesses? Probably not, as most of these hardware issues aren't relevant for all those old systems.
New purchases, however, would definitely merit a look. Give it a year, and the case for Win 7 becomes much more obvious.

more than 4 years ago

Intel Targets AMD With Affordable Unlocked CPUs

dlapine Meh (207 comments)

The article comparing values uses the highest-priced motherboard available for AMD for a "midrange" system, then claims that the Intel-based total system is a value. If you spend $350 on a 6-core processor, then spending $140 on a high-end motherboard is reasonable. If you're spending $99 for a low-end AMD quad, you're probably in the market for a more reasonably priced motherboard (~$100) to go with it. The comparison is valid for the high-end AMD CPUs, but not their budget stuff, as a $40 drop in price is a big deal for a system with a $100 CPU.
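To put a rough number on why that $40 matters more on the budget build, here's a sketch using only the CPU and motherboard prices quoted above (ignoring RAM, PSU, and the rest of the system):

```python
# Relative impact of swapping the $140 board for a ~$100 one,
# measured against CPU + motherboard cost only. Prices from the comment.

def savings_pct(cpu: float, board: float, cheaper_board: float) -> float:
    """Board savings as a percentage of the original CPU + board cost."""
    return 100 * (board - cheaper_board) / (cpu + board)

budget = savings_pct(99, 140, 100)     # low-end AMD quad
high_end = savings_pct(350, 140, 100)  # 6-core build

print(f"budget build:   {budget:.1f}% saved")    # ~17%
print(f"high-end build: {high_end:.1f}% saved")  # ~8%
```

The same $40 is roughly twice as significant on the budget platform, which is the comment's point about the comparison being skewed.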

That being said, being able to overclock this thing is directly aimed at the enthusiast market. "I got 6 cores, w00t!" "Yeah, well I'm at 4GHz on a quad, so there!" It definitely improves the competition between the high-end AMD hexa-cores and the midrange Intel quads, and makes the Intel option more appealing to the enthusiast.

more than 4 years ago



Orwell removed from Kindle

dlapine dlapine writes  |  more than 5 years ago

dlapine writes "What's the difference between an ebook and the dead-tree edition? Seems that the dead-tree versions are harder for the publisher or middleman to recall. On July 17th, Amazon bowed to pressure from the publisher and electronically deleted copies of George Orwell's books, remotely and without warning to their customers. A refund was provided, but the irony involved in silently pulling 1984 from the electronic bookshelves of Kindle users is immense. "Congratulations! Your account has been increased to 0 books by George Orwell."

Still want to buy that Kindle?

More information here: some-e-books-are-more-equal-than-others"

dlapine dlapine writes  |  more than 8 years ago

dlapine writes "As of midnight, 1.5 million subscribers to Mediacom cable TV will lose access to 24 local TV stations owned by Sinclair Broadcast Group, due to an ongoing dispute between the cable company and the station operator over access fees. SBG stations no longer available through Mediacom include NBC, CBS, FOX, ABC, CW & MNT affiliates. The stations affected all offer free broadcast service, and will continue to do so; it's just that SBG has denied Mediacom, and only Mediacom, the continued use of their channels for cable delivery.

This is potentially interesting to us Slashdotters, beyond the obvious "I can't see Desperate Housewives" ranting and "Honey, where'd I put the rabbit ears?" search. This is the first time that a large cable network has been denied use of broadcast stations from a large supplier for monetary reasons. What does this imply for Internet TV? There's also the other issue that SBG can be considered to have a conservative agenda, from its current and past actions. Does the loss of 1.5 million viewers for a conservative conglomerate imply that the media is once again liberal?"


