
Comments


Jesse Jackson: Tech Diversity Is Next Civil Rights Step

Theovon The root of the problem is culture & social class (424 comments)

For some reason, Americans have developed stereotypes of "white" and "black" that are tied far more to social class than to anything else. When you say "white," we imagine someone from the middle class. When you say "black," we imagine someone of lower socioeconomic status. How many blacks are in the middle class, I'm not sure, but as for whites in the lower classes, we have them coming out our ears. While we may have millions of blacks living in ghettos, we have ten times as many whites living in trailer parks.

Because of our confusion between ethnicity and social class, we end up with things like Dave Chappelle's "Racial Draft": http://www.thecomedynetwork.ca/blogs/2013/06/chappelles-show-june5-racial-draft
While amusing, it highlights the real problem, and this false stereotype is widespread throughout American culture.

I recall an interview with Bill Cosby about educational advancement among black children. Peers discourage each other from studying because it's "acting white." In fact, it is "acting middle class": the same kind of discouragement occurs among lower-class whites as well. As long as education is not valued within a group, that group will have difficulty being equally represented in white-collar industries.

To explain the disparity between population demographics and white-collar job demographics, we have to work out what proportion of each underrepresented group discourages education. People like Jesse Jackson want to make this all out to be the result of prejudice on the basis of genetics or skin color. Honestly, I think we're long past that. There are still plenty of racist bastards out there, but in general, we do not have pink people acting intentionally or unconsciously to undermine the advancement of brown people when it comes to getting college degrees.

It's not PC to talk about genetic differences, but genetics is interesting. Geneticists have identified differences between ethnic groups and correlated them with some minor differences in physical and cognitive adaptations. Things like muscle tone, susceptibility to certain diseases, and social ability have been correlated, to a limited degree, with variation in human DNA. But the average differences between genetic groups are minuscule compared to their overlap: statistically, the difference in group means is tiny relative to the within-group spread (a very small Δμ/σ) for basically any meaningful measurable characteristic.
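A minimal sketch of that statistical point (the effect size below is hypothetical, chosen only for illustration): for two normal distributions whose means differ by a small fraction of their common standard deviation, the overlap is nearly total.

    from math import erf, sqrt

    def normal_cdf(x):
        # CDF of the standard normal distribution.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def overlap_coefficient(d):
        # Overlap of N(0,1) and N(d,1) is 2 * Phi(-|d|/2).
        return 2.0 * normal_cdf(-abs(d) / 2.0)

    # Hypothetical standardized mean difference (delta-mu / sigma) of 0.1:
    print(round(overlap_coefficient(0.1), 3))  # 0.96: the distributions are ~96% overlapped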

Thus I can only conclude that correcting any disparities must come from within. Regulating businesses won't do any good, because unqualified minorities will end up getting unfairly hired and promoted. We have to start with the children and get them to develop an interest in science and math. If Jesse Jackson wants to fix this problem, he needs to learn science and math and start teaching them. I assure you, even at his age, he has that capability, if he just cared enough to do it. Unfortunately for him, if he were to corrupt himself with this knowledge, he would find himself taking a wholly different approach than the "we're victims" schtick he's played most of his life. Personally, I prefer the "the universe is awesome" philosophy held by Neil deGrasse Tyson. He's one of my biggest heroes, and that has nothing to do with his skin tone.

One last thought: I'm sure someone will find something racist in what I have said. Either that, or I'm being too anti-racist and appear to be overcompensating. There are also aspects of these social issues I know nothing about; I'm just writing a comment on Slashdot that is about as well-informed as any other comment. One thing people should think about in general is whether or not they have hidden prejudices. It's not their fault, having been brought up in a culture that takes certain things for granted. Instead of burying our heads in the sand, we should be willing to admit that we probably do have subconscious prejudices. That's okay, as long as we consciously behave in a way that is fair to other human beings, regardless of race, gender, sexual orientation, autism, or any other thing they didn't choose to be born with (and plenty of things they have chosen, because it's people's right to choose).

yesterday

German NSA Committee May Turn To Typewriters To Stop Leaks

Theovon Listening to keystrokes + HMM = Profit! (244 comments)

Passwords have been stolen just by listening to keyboard click noises. Why would a typewriter be any different? A relatively straightforward codebook analysis of keypress noises, plus a hidden Markov model, plus the Viterbi algorithm will let you calculate the highest-probability sequence of letters for a given sequence of sounds and the timings between them, even in German!
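A minimal sketch of the decoding step (all probabilities below are made up; a real attack would estimate them from labeled recordings and a language model):

    # Toy Viterbi decoder: hidden states are letters, observations are
    # discretized keypress-sound classes from a codebook. All numbers
    # are hypothetical, purely for illustration.
    states = ["d", "e", "r"]
    start_p = {"d": 0.4, "e": 0.3, "r": 0.3}
    trans_p = {  # letter-to-letter transitions (the language model)
        "d": {"d": 0.1, "e": 0.8, "r": 0.1},
        "e": {"d": 0.2, "e": 0.2, "r": 0.6},
        "r": {"d": 0.4, "e": 0.3, "r": 0.3},
    }
    emit_p = {  # P(sound class | letter), learned from audio
        "d": {"click1": 0.7, "click2": 0.2, "click3": 0.1},
        "e": {"click1": 0.1, "click2": 0.8, "click3": 0.1},
        "r": {"click1": 0.2, "click2": 0.2, "click3": 0.6},
    }

    def viterbi(obs):
        # Most probable letter sequence for a sequence of sound classes.
        col = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
        for o in obs[1:]:
            new_col = {}
            for s in states:
                new_col[s] = max(
                    ((p * trans_p[prev][s] * emit_p[s][o], path + [s])
                     for prev, (p, path) in col.items()),
                    key=lambda t: t[0],
                )
            col = new_col
        return max(col.values(), key=lambda t: t[0])[1]

    print(viterbi(["click1", "click2", "click3"]))  # ['d', 'e', 'r']

Timing between keypresses would enter the same framework as an additional observable per transition.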

Mind you, they'd have to get a sound bug in there, but malware-infected computers near the typewriters could serve that purpose.

Anyhow, basically, the technology used to do automatic speech recognition would make short work of tapping typewriters, so they’re fooling themselves if they think this’ll make much difference.

BTW, I have a strong suspicion that the Germans' outrage is all a big charade. Every major country has big spy operations. The NSA is neither unique nor the first of its kind. The Germans could not have been ignorant of at least the general nature of the NSA's dealings before Snowden, so while they openly object, secretly this is business as usual. By doing this, they fool their people into thinking they're not being spied on by their own government and, using the US as a scapegoat, they also generate a degree of solidarity. Russian spy operations, of course, are way worse, so their objections are the same bullshit. And the Chinese government is all about lying to, well, basically everyone while they use both capitalism and cyberwarfare to take over the world and control everyone, so their recent statement about the iPhone is also a crock of shit.

This reminds me of Andrew Cuomo's push to restore trust in government. The whole idea is disingenuous. Governments, like any large organization, will do what the people need only under checks & balances and transparency.

And as a final note, I believe that the stated purpose of the NSA is a good one: mine publicly available data to identify terrorist activity. That sounds like a good thing to do. It's the illegal violations of privacy that are wrong. They violate our rights because it's inconvenient to get the info they need some other way. It's also inconvenient for me to work a regular job instead of selling drugs. There are much more convenient ways to achieve my goals that I avoid because they are wrong. To do its job, the NSA needs to find clever ways to acquire the information it needs WITHIN THE LAW.

about two weeks ago

Economist: File Sharing's Impact On Movies Is Modest At Most

Theovon Anti-piracy campaigns are highly effective (214 comments)

But not for the reason you think.

A question we should be asking ourselves is what impact piracy would have had on movie revenue if we'd had higher-speed Internet in the days of Napster and Kazaa. We currently live in a culture where even non-technical people know that piracy is a copyright violation. There's also the looming threat of being sucked up in a dragnet, or of having your ISP (or the NSA) volunteer your metadata to the MPAA. People don't avoid filesharing because it's unethical or illegal. They avoid it because it's relatively inconvenient (requiring technical knowledge) and because they fear excessive penalties if they're caught.

If pirating movies were as simple as downloading an app and searching people's libraries, the amount of piracy would be far greater, and the impact on revenue would be more significant.

What's really curious to me, however, is the amount of time and effort some people spend on this. Personally, I'd rather optimize to reduce how much time I spend on it than try to see how cleverly rebellious I can be. If I want to watch a movie that's currently out on DVD, I have four classes of options:
(1) I could spend about half an hour figuring out which of the numerous available torrents is in a playable format and not a fake and then maybe a couple hours downloading it. If I’m really lucky, I can burn it to a DVD that my player will understand so I don’t have to take the time to connect my laptop to the TV.
(2) I could run down to the nearest RedBox, about 15 minutes round trip, and spend the rest of the time doing some consulting work. Not only would I have a legal copy, but I’d come out ahead financially.
(3) If I have some patience to wait a day, I can order my own copy to keep from Amazon Prime, and I’ll STILL come out ahead financially.
(4) If I’m dead-set on a lengthy download, services like iTunes offer up a wide variety of downloadable media.

I suspect most of us clever enough to avoid getting caught pirating think this way. The legal options are just easier, less costly (time == money), and less risky. Those with the skills are already a minority, so the only people doing any significant amount of piracy are those with both the skills and nothing better to do than to see how clever they can be at unnecessarily breaking the law.

I encounter that attitude a surprising amount among students, though. There are people who will spend more time and effort trying to BS their way through an assignment and/or find a way to avoid doing it than it would take to actually do the assignment. Doing the assignment requires learning something new, while all this "clever" avoidance relies on established skills. I don't know why these people bothered to go to college if they have no interest in learning the material. I guess they feel cultural or parental pressure, but I don't like it when they make it my problem.

about two weeks ago

Normal Humans Effectively Excluded From Developing Software

Theovon Elites in any field must have some OCD (608 comments)

Those people really far out on the cutting edge of new sciences are successful only because they have some major obsessive qualities. They are driven to learn, understand, and create. They understand things so abstract and esoteric that it would be all but impossible to explain some of these ideas to a lay person. And each of us has some secret weapon too. Mine, for instance, is that I can code rings around most other CS professors. I'm not actually smarter than they are. Indeed, most of them seem to be better at coming up with good ideas on the first shot. My advantage is that I can filter out bad ideas faster.

A key aspect of the areas we are experts in is that there are underlying, unifying principles. Subatomic particles fit into categories and have expected behaviors that fit a well-tested model. CPU architectures, even exotic ones, share fundamentals of data flow and computation. CS is one of those fields that is invented more than it is discovered, and as we go along, scientists develop progressively more coherent approaches to solving problems. Out-of-order instruction scheduling algorithms of the past (Tomasulo's algorithm and the CDC 6600 scoreboard, for instance) have given way to more elegant approaches that solve multiple problems using common mechanisms (e.g., register renaming and reorder buffers). You may think the x86 ISA is a mess, but that's just a façade over a RISC-like back-end that is finely tuned based on the latest breakthroughs in computer architecture.
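As a minimal illustration of the renaming idea (a toy sketch, not any particular CPU's implementation): every architectural destination gets a fresh physical register, which eliminates false write-after-write and write-after-read hazards while preserving true dependences.

    # Toy register renaming (illustrative only).
    from itertools import count

    phys = count()      # stand-in for a free list of physical registers
    rename_map = {}     # architectural register -> current physical register

    def rename(dest, srcs):
        renamed_srcs = [rename_map.get(s, s) for s in srcs]  # read current mapping
        rename_map[dest] = "p%d" % next(phys)                # fresh destination
        return rename_map[dest], renamed_srcs

    print(rename("r1", ["r2", "r3"]))  # ('p0', ['r2', 'r3'])
    print(rename("r4", ["r1"]))        # ('p1', ['p0'])  true dependence preserved
    print(rename("r1", ["r5"]))        # ('p2', ['r5'])  WAW hazard with p0 eliminated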

Then there's web programming. Web programming is nothing short of a disaster. There are a million and one incompatible Javascript toolkits. HTML, CSS, and Javascript are designed by committee, so they have gaps and inconsistencies. Writing even the simplest web site (with any kind of interactivity) requires working with five different languages at once (HTML, CSS, Javascript, SQL, and at least one back-end language like PHP), and they're not even separable; one type of code gets embedded in the other. People develop toolkits to try to hide some of these complexities, but few approach feature-completeness, and it's basically impossible to combine some of them. Then there's security. In web programming, the straightforward, intuitive approach is always the wrong approach, because it's full of holes. These tools were not originally developed with security in mind, so you have to jump through hoops and manually plug the holes with a ton of extraneous code. In terms of lines of code, your actual business logic will be small in comparison to all the other cruft you need to make things work properly.

When I work on a hard problem, I strip away the side issues and focus on the core fundamental problem that needs to be solved. With most software, it is possible to break systems down into core components that solve coherent problems well and then combine them into a bigger system. This is not the case with web programming, and this is what makes it inaccessible to "normal people." The elites in software engineering are the sorts of people who extract the grand unifying theories behind problems, solve the esoteric problems, and provide the solutions as tools to other people. Then "normal people" use those tools to get work done. With the current state of the web, this is basically impossible.

about three weeks ago

By 2045 'The Top Species Will No Longer Be Humans,' and That Could Be a Problem

Theovon Who are these idiot futurists? (564 comments)

MAYBE machine intelligence will surpass humans in some ways, but where the hell do we get this idea that they’ll decide we’re unstable and wipe us out? Sci Fi? Do we get it from anything RATIONAL?

We humans have our emotions from millions of years of evolution in hostile environments on earth. And really, emotions are just low-level intelligence adaptations for detecting and avoiding threats. They’re somewhat vestigial in humans, due to layers of more advanced intelligence on top of earlier developments. With intelligent computers, we’re bypassing all of that and just giving them basic reasoning capabilities over huge data sets. For AI, the evolutionary mutations (i.e. advances in AI algorithms [*]) and selective pressures (which AI algorithms we choose to deploy) are COMPLETELY DIFFERENT from what our ancestors faced.

Computers will not spontaneously develop either intelligence or the kind of moronic reasoning necessary to decide to wipe humans out. To get the latter, we'd need a massive conspiracy of megalomaniac genius experts in artificial intelligence who intentionally develop malware to infect military robots that go around shooting people. Oh, and we'd need the robots too.

Some people forget that politics (or aspects of it) and paranoia move as fast as technology. Every time some scientific advance occurs, a bunch of ethics people (some sensible, some not so much) pounce on it and pick it to pieces. IBM Watson isn't capable of the kind of decision making that would make humans obsolete, but there are plenty of people who are worried about it and ready to develop all manner of reasonable and asinine regulations.

Bottom line: intentionally developing or accidentally evolving destructive AIs like this is highly implausible, due to lack of motivation, and the evolutionary pressures that do exist run counter to this kind of development.

[*] Implementations of AI programs may be written intentionally by humans, but advances in algorithms evolve as memes. Evolutionary steps may often seem intentional, but quite often they're the result of arbitrary combinations of pre-existing ideas in people's minds, where the cleverness lies mostly in figuring out that these ideas can go together and finding a way to combine them. Technology evolves in the same way that languages do.

about three weeks ago

Facebook Fallout, Facts and Frenzy

Theovon Re:The good Samaritan always gets his ass kicked (160 comments)

Yes and no. They were testing something that they HYPOTHESIZED could reduce the quality of the user experience. And IIRC, that hypothesis turned out to be wrong (to the extent that one can get that from the statistics).

If all user interface modifications that lead to an improved user experience were intuitive, then Facebook would have implemented them already. They are now at a point where they have to consider things that are NOT intuitive. The idea that filtering other people’s posts in a way that increases their negativity should actually lead to an improvement in user experience is not intuitive. Moreover, their explanation for the result (that people are turned off by their friends doing too well) is conjecture, albeit a reasonable one. Human psychology is complex, and for Facebook to continue to advance their mind job on people, they have to get really clever and do things that aren’t obvious.

Like I said, I feel it was a mistake not to get human-subjects approval before conducting this research. If this were an analysis of pre-existing data, they wouldn't need approval. If Facebook had done this unilaterally, they wouldn't need approval. But in these specific circumstances, the law is somewhat ambiguous on this point.

You’ll notice that I referred to Facebook as a mind job. I really don’t like it. I have an account, but I seldom use it. However, that doesn’t mean I can’t try to be objective about this research experiment.

about a month ago

Facebook Fallout, Facts and Frenzy

Theovon Re:The good Samaritan always gets his ass kicked (160 comments)

The requirement for informed consent was ambiguous in this case. If I had been in their position, I would have erred on the side of caution, and the research faculty who consulted on this project should have been more resolute about it. If anything, it is those people who should have done the paperwork. I think their failure to get informed consent was a mistake, but I don’t believe it was any kind of major ethical violation. It does no harm to get informed consent, even if you don’t legally need it, and there are moral arguments for getting it in any case.

My main point is that this kind of "manipulation" has been going on for a long time and will continue to occur. Facebook intentionally manipulates users in all sorts of ways to determine what gets users to use their service and click ads. The only practical difference between this intentional manipulation and past intentional manipulation is that this time, they reported on it. Going forward, they will continue not to get informed consent (because they don't need it), but they will also continue to manipulate. Thus the travesty is that they will simply stop reporting their findings; that is the ONLY thing that will change, and the rest of the world will be less informed because of it.

about a month ago

Facebook Fallout, Facts and Frenzy

Theovon The good Samaritan always gets his ass kicked (160 comments)

As has been pointed out many times, Facebook was doing their usual sort of product testing. They actively optimize the user experience to keep people using their product (and, more importantly, clicking ads). The only difference between this time and all the other times was that they published their results. This was a good thing, because it introduced new and interesting scientific knowledge.

Because of this debacle, Facebook (and just about every other company) will never again make the mistake of sharing new knowledge with the scientific community. This is truly a dark day for science.

Ferengi rule of acquisition #285: No good deed ever goes unpunished.

about a month ago

Will 7nm and 5nm CPU Process Tech Really Happen?

Theovon Re:CMOS scaling limited by process variation (142 comments)

I don't know the principles behind how doping concentrations are chosen, but I'm sure they're optimized for speed. Also, you can compensate for Vth variation using body bias, but it's basically impossible to do this per-transistor. You can do it for large blocks of transistors, which lets you compensate a bit for systematic variation (due mostly to optical aberrations in lithography), but there's nothing you can do about random variation. There's also effective gate-length variation, which I don't think you can compensate for using body bias.
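A toy first-order model of that compensation (alpha-power delay law; all constants hypothetical): body bias shifts Vth, so a block whose transistors came out slow can be pulled back toward nominal delay.

    # Toy alpha-power-law delay model (constants are hypothetical).
    # Gate delay ~ V / (V - Vth)^alpha; forward body bias lowers Vth.
    VDD, ALPHA = 1.0, 1.3

    def delay(vth):
        return VDD / (VDD - vth) ** ALPHA

    nominal = delay(0.30)
    slow = delay(0.35)            # systematic variation raised Vth by 50 mV
    biased = delay(0.35 - 0.05)   # forward body bias cancels the shift

    print("slow block: %.2fx nominal delay" % (slow / nominal))  # ~1.10x
    print("with body bias: %.2fx nominal" % (biased / nominal))  # 1.00x

Since the bias is applied per well or per block, it cancels only the block-wide systematic component; random per-transistor variation remains.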

about a month ago

Will 7nm and 5nm CPU Process Tech Really Happen?

Theovon Re:CMOS scaling limited by process variation (142 comments)

What you’re talking about is “approximate computing,” which is a hot area in research right now. If you can tolerate some errors, then you can get a massive increase in performance.

about a month ago

Germany Scores First: Ends Verizon Contract Over NSA Concerns

Theovon Better start prepping (206 comments)

So, first other countries start dropping the dollar as the international reserve currency. Now they’re going to stop buying our products and services. Our economy is going to hell in a handbasket.

about a month ago

Will 7nm and 5nm CPU Process Tech Really Happen?

Theovon CMOS scaling limited by process variation (142 comments)

There are a number of factors that affect the value of technology scaling. One major one is the increase in power density due to the end of supply and threshold voltage scaling. But one factor that some people miss is process variation (random dopant fluctuation, gate length and wire width variability, etc.).

Using some data from ITRS and some of my own extrapolations from historical data, I tried to work out when process variation alone would make further scaling ineffective. Basically, when you scale down, you get a speed and power advantage (per gate), but process variation makes circuit delay less predictable, so we have to add a guard band. At what point does the decrease in average delay become equal to the increase in guard band?

It turns out to be at exactly 5nm. The “disappointing” aspect of this (for me) is that 5nm was already believed to be the end of CMOS scaling before I did the calculation. :)

Incidentally, if you multiply out the guard bands already applied for process variation, supply voltage variation, aging, and temperature variation, we find that for an Ivy Bridge processor, about 70% of the energy going in is “wasted” on guard bands. In other words, if we could eliminate those safety margins, the processor would use 1/3.5 as much energy for the same performance or run 2.5 times faster in the same power envelope. Of course, we can’t eliminate all of them, but some factors, like temperature, change so slowly that you can shrink the safety margin by making it dynamic.
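A back-of-the-envelope version of that multiplication (the individual margin values below are hypothetical, chosen only to land near the quoted figure): guard bands compound multiplicatively on the supply voltage, and dynamic energy scales roughly as V^2.

    # Hypothetical per-source voltage guard bands (not measured values).
    margins = {
        "process variation": 1.25,
        "supply voltage variation": 1.15,
        "aging": 1.10,
        "temperature": 1.18,
    }

    total_v_margin = 1.0
    for m in margins.values():
        total_v_margin *= m

    energy_factor = total_v_margin ** 2   # dynamic energy ~ C * V^2
    print("voltage margin: %.2fx" % total_v_margin)                     # ~1.87x
    print("energy overhead: %.1fx" % energy_factor)                     # ~3.5x
    print("wasted fraction: %.0f%%" % (100 * (1 - 1 / energy_factor)))  # ~71%

which is consistent with the "about 70% wasted" and "1/3.5 as much energy" figures above.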

about a month ago

The Game Theory of Life

Theovon Re:Is it me or... (85 comments)

The simulated-universe hypothesis is based on the seemingly odd coincidence that our universe's operation looks identical to information theory.

The problem with that hypothesis is that people seem to forget that our concept of information theory is a function of the universe it was developed in. Thus, it’s no coincidence, and the congruence of physics and information theory is not evidence of simulation.

about a month and a half ago

Yahoo's Diversity Record Is Almost As Bad As Google's

Theovon Diversity of applicants? (435 comments)

What I want to know is, what kind of applicant pool do these companies have? If their hiring demographics match their applicant pool, there's not all that much they can do except maybe try harder to recruit in communities with higher proportions of minorities. If the minority applicants they get aren't as well qualified (objectively), we shouldn't encourage companies to hire less qualified applicants. That would be reverse discrimination, which would also be wrong.

Maintaining higher diversity avoids a monoculture and increases the diversity of thought, which is good for problem solving. But you can’t squeeze blood from a stone. (Well, except for blood stones.)

about a month and a half ago

NSF Researcher Suspended For Mining Bitcoin

Theovon Re:Compartmentalization and ethics (220 comments)

They can be compared in that there are ethical considerations in both cases. As I said, abusing the supercomputer is a much more extreme case. In many ways, my examples are victimless crimes, while the supercomputer case had a far more tangible impact. On a relative moral scale, the supercomputing case was much more severe and would therefore merit a more severe penalty. My whole point, I guess, is that even victimless crimes are cases where an ethical person should think twice before acting.

I do feel compelled to point out that "victimless crime" is a loaded term that is abused by some people who want to poke their noses where they don't belong. Some people would, for instance, say that growing your own weed and smoking it is a "victimless crime." I'm not sure what the current laws are, but since this doesn't involve interstate commerce, arguably it shouldn't be a crime at all, and it might be called one in the first place only because someone objects in general to smoking pot. Even if it were technically a crime, I think the ethics in this case depend on the broader impact. If smoking pot improves your overall well-being and doesn't negatively impact your functioning in society, then it's probably fine. If you're neglecting your kids because of it, then it's wrong. The ethical failure, however, is not in smoking pot but in failing to moderate the impact of your choices; it's just as bad to neglect your kids playing video games online.

In the case of abusing equipment bought with taxpayer money, even if it's "victimless," it's still unethical, because you're acting beyond your rights with respect to an asset you have been trusted with. In other words, the ethical failure is not in the use of the equipment per se, but rather in a breach of trust with respect to how you're expected to use it. It's one thing to borrow the company truck to go grab lunch. It's entirely another to borrow company A's truck to go do consulting work for company B, even if you're not on company A's clock at the time and you use your own money to fill the gas tank.

about 2 months ago

NSF Researcher Suspended For Mining Bitcoin

Theovon Compartmentalization and ethics (220 comments)

The abuse of the supercomputer is an extreme case. But there are other less clear-cut areas. For instance:

- What if I bring my own computer to the university and use their electricity to generate bitcoins?
- What if I bring university-owned equipment (that I have control over) home and use it to mine bitcoins on my electricity?

In either case, something that doesn't really belong to me (even if I'm in charge of it and have the right to relocate it) is being used for profit in a way that is (a) most likely against policy and (b) not ethical in the first place.

The latter category is the really tempting one. Nobody would catch me, because all the network traffic and electricity usage is at my own home. Any impact on the longevity of the equipment is moot, because it would probably go obsolete long before it suffered hardware failure. And of course, I could claim that I'm taking it home for official purposes (nobody would question me anyhow). This is one of those cases where you have to let your sense of right and wrong take precedence over the fact that you're clever enough not to get caught.

about 2 months ago

R Throwdown Challenge

Theovon Fortran throwdown challenge! (185 comments)

This guy must have been reading the recent stuff on Fortran and decided to jump on the bandwagon.

Fortran was written by engineers and scientists for engineers and scientists.
R is written by statisticians for statisticians.

Well, there you have it. If a language or other kind of tool was developed by practitioners of X for other practitioners of X, it’s likely that it will be better than some other tool that was designed for a different purpose.

Who would have thunk it.

about 2 months ago

Understanding an AI's Timescale

Theovon Computers don’t THINK yet (189 comments)

Computers do MATH (and data movement, etc.) really, really fast compared to humans. But then again, neurons do all sorts of low-level operations really fast too, compared to the timescale we tend to think at on a high level. What we don't have are algorithms that are both fast and accurate for things like vision and speech recognition, MUCH LESS some form of cognition. (Yes, automatic speech recognition and computer vision are very complex and capable, but they pale in comparison to what humans can do. Imagine a computer trying to make sense of this image: http://accidentalblogger.typepad.com/.a/6a00d8341c575d53ef0163061e18fa970d-pi). Despite what Ray Kurzweil wants to imply, having compute power equivalent to the human brain will not magically cause computers to become conscious. We don't know how to do that, and without that knowledge, we can't write the necessary code.

So this timescale thing is bullshit. IBM Watson is amazing, but in practical terms it doesn't really think a lot faster than a human, and it isn't exactly something you can have a philosophical conversation with. (BTW, on a completely coincidental note, I work for the Thomas J. Watson School of Engineering.)

about 2 months ago

AMD Preparing To Give Intel a Run For Its Money

Theovon Re:Just like Bulldozer? (345 comments)

I've seen a number of papers that squeeze more performance out of x86 processors by being micro-op aware. Some just choose more carefully among instruction sequences, given awareness of the micro-ops that will be generated, while others consider what-if scenarios about how much more performance you could get with more control over the micro-ops themselves. For instance, given an ISA sequence and the corresponding micro-op sequence, is there a functionally equivalent alternative micro-op sequence? To some degree, compiler writers have to reverse-engineer the micro-op generation in order to generate micro-op sequences that schedule better. Even Intel compiler writers do this; although they have access to exactly which micro-ops are generated, it's still not trivial to arrive at the optimal micro-op sequence.
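A toy version of that kind of cost-model choice (the micro-op counts are hypothetical, not from any real decoder table): given functionally equivalent instruction sequences, emit the one whose decoded micro-op sequence is cheapest.

    # Hypothetical micro-op counts per instruction form.
    uop_cost = {
        "mov reg, reg": 1,
        "add reg, imm": 1,
        "lea reg, [reg+imm]": 1,
    }

    # Two functionally equivalent ways to compute dst = src + 4:
    candidates = [
        ["mov reg, reg", "add reg, imm"],   # 2 micro-ops
        ["lea reg, [reg+imm]"],             # 1 micro-op
    ]

    def uops(seq):
        return sum(uop_cost[insn] for insn in seq)

    best = min(candidates, key=uops)
    print(best, uops(best))   # emit the cheaper decoded form

A real compiler would also weigh port pressure and dependence chains, not just micro-op counts.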

about 2 months ago

AMD Preparing To Give Intel a Run For Its Money

Theovon Re:Just like Bulldozer? (345 comments)

Actually, a Microsoft engineer developed the 64-bit extension, and AMD adopted it.

about 2 months ago

Submissions


Ask Slashdot: Any really good texts on evolutionary details?

Theovon writes | about 2 years ago

Theovon (109752) writes "To me, that we evolved from earlier life forms is a straightforward conclusion. We have mountains of evidence, and current theories are sound given that data. But I'm not a biologist, so I find it a challenge to get access to much of that data. I'm looking for a single coherent tome (or maybe multivolume set) of biological data used to develop specific theories of evolution of many ancient and modern family trees. I don't want mere drawings of fossils in sequence like in a high school textbook. I want to see photographs of the original fossils, along with data about geologic strata, measurements of numerous morphological features, and explanations of the lines of reasoning that lead to particular conclusions. Sections on DNA analysis would be great too, along with any other interesting lines of evidence. The conclusions that scientists draw are fascinating, and I'd like to dig deeper into the data they started from. Would the slashdot crowd be able to help me find a top example of such a resource?"

'Something is deeply broken in OS X memory management'

Theovon writes | more than 2 years ago

Theovon (109752) writes "Ever since Apple released Lion, countless users have been complaining about performance problems, even on top-of-the-line hardware. OSX point releases have been coming out, but this issue has remained completely unaddressed by Apple, as per their usual "it's not our fault" approach to their mistakes. Well, some researchers have been investigating this. Perhaps Apple will finally take notice. The original article is here, and the OSNews article is here."

Pampers Dry Max diapers, chemical burns

Theovon writes | more than 4 years ago

Theovon (109752) writes "Despite the self-deprecating jokes, many of us slashdotters do indeed have the social skills to find mates and have children. This is why articles like the recent one on delayed umbilical cord cutting are of interest to us. Well, here's another one for us parents, something my week-old daughter is experiencing first-hand. Procter and Gamble is putting their heads in the sand and denying all responsibility in response to a spate of reports that the most recent version of their "Dry Max" diapers are causing severe rashes that appear to be chemical burns. There are articles all over the place, with questions and blogs and even P&G's lame response trying to suggest that it's a mere coincidence that rashes are increasing at the same time that their new diapers came out. The feds are investigating, and hopefully, there will be a recall soon. My little girl's rash isn't quite as severe as what I've been reading about, since we caught it early and are using liberal amounts of Desitin. We're accustomed to seeing corporate greed stand in the way of product quality, every one of us who is forced to use Microsoft products. But it's one thing to lose some work. It's entirely another when helpless babies are physically injured by a product that we're supposed to trust, and even worse when the manufacturer tries to tell us that we're the ones at fault."

Linux Not Quite Ready for New 4K-sector Drives

Theovon writes | more than 4 years ago

Theovon (109752) writes "We've seen a few stories recently about the new Western Digital Green drives, including this one on slashdot. According to WD, their new 4096-byte sector drives are problematic for Windows XP users but not Linux or most other OS's. There's an article on OS News that suggests that Linux users should not be complacent about this, because not all the Linux tools like fdisk have caught up. The result is a reduction in write throughput by a factor of 3.3 across the board (a 230% overhead) when 4096-byte clusters are misaligned to 4096-byte physical sectors by one or more 512-byte logical sectors. The author does some benchmarks to demonstrate this. Also, from the comments on the article, it appears that even parted is not ready since by default, it aligns to "cylinder" boundaries, which are not physical cylinder boundaries and are multiples of 63."

OGP releases video of VGA emulator booting

Theovon writes | more than 5 years ago

Theovon writes "Slashdot hasn't seen much news about the Open Graphics Project for some time now, but the OGP has been quite busy, especially recently. As you may recall, the OGP's goal is to develop a fully open-source graphics card. All specs, designs, and source code are released under Free licenses. Right now, they're using FPGAs (large-scale reprogrammable chips) to build a development platform that they call OGD1. And they've just completed an alpha version of legacy VGA emulation, apparently not an easy feat. They have posted a Youtube video of OGD1 driving a monitor, showing Gentoo booting up in a PC. This completes a major step, allowing OGD1 to act as the primary display in an x86 PC. The announcement can be seen on the OGP home page, and there's an OSNews.com article. Finally, the Free Software Foundation has taken notice of this and is asking for volunteers to help with the OGP wiki."

Dedicated compute box: Persistent terminals?

Theovon writes | more than 6 years ago

Theovon (109752) writes "I just built an expensive high-end quad-core Linux PC, dedicated for number-crunching. Its job is to sit in the corner with no keyboard, mouse, or monitor and do nothing but compute (genetic algorithms, neural nets, and other research). My issue is that I would like to have something like persistent terminal sessions.

I've considered using Xvnc in a completely headless configuration (some useful documentation here, here, here, and here). For most of my uses, though, this is overkill: a total waste of memory and compute time. However, if I decide to run FPGA synthesis software under WINE, it will become necessary. Unfortunately, I can't quite figure out how to get a persistent X11 session where I'm automatically logged in (or can stay logged in) while maintaining enough security that I don't mind opening the VNC port on my firewall (with a changed port number, of course). I'm also going to check out Xpra, but I've only just heard about it and have no idea how to use it.

For the short term, the main need is just terminals. I'd like to be able to connect and see how something is going. One option is to just run things with nohup and then log in and "tail -f" to watch the log file. I've also heard of screen, but I'm unfamiliar with that too.

Have other slashdot users encountered this situation? What did you use? What's hard, what's easy, and what works well?"
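For the nohup-style option, a minimal Python launcher along these lines would work (the command and log path are placeholders): it detaches the job from the terminal so it survives logout and sends all output to a log you can later tail -f.

    # Minimal detached-job launcher (command/log names are hypothetical).
    import subprocess

    def launch_detached(cmd, logfile):
        with open(logfile, "ab") as log:
            subprocess.Popen(
                cmd,
                stdin=subprocess.DEVNULL,   # no controlling terminal input
                stdout=log,                 # watch later with: tail -f
                stderr=subprocess.STDOUT,
                start_new_session=True,     # new session: immune to hangup
            )

    launch_detached(["./run_experiment", "--trials=1000"], "experiment.log")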

Theovon writes | more than 7 years ago

Theovon (109752) writes "It's only been two days since the announcement of the official release of Ubuntu 6.10 (Edgy Eft), and the fallout has been very interesting to watch. By and large, fresh installs of Edgy tend to go well. A few problems here and there, especially with installation of closed-source ATI and nVidia drivers, but for the most part things have been smooth. Many people report improved performance over Dapper, improved stability, better device support, etc. A good showing. But what I find really interesting is the debacle that it has been for people who wanted to do an "upgrade" from Dapper (6.06). Installing OS upgrades has historically been fraught with problems, but previous Ubuntu releases, many other Linux distros, and MacOS X have done surprisingly well in the recent past. But not Edgy. Reports are flooding into Ubuntu's Installation & Upgrades forum from people having myriad problems with their upgrades. One user described it as a nightmare. Users are producing detailed descriptions of problems but getting little help. This thread has mixed reports and is possibly the most interesting read. Many people report that straight-forward upgrades of relatively mundane systems go well, but anything the least bit interesting seems not to have been accounted for, like software RAID, custom kernels, and Opera. Even the official upgrade method doesn't work for everyone, including crashes of the upgrade tool in the middle of installing, leaving systems unbootable, no longer recognizing devices (like the console keyboard!), reduced performance, X server crashes, wireless networking problems, the user password no longer working, numerous broken applications, and many even stranger things. Some of this is fairly subjective, with Kubuntu being a bit more problematic than Ubuntu, with reports that Xubuntu seems to have the worst problems, and remote upgrades are something you don't even want to try. Failed upgrades invariably require a complete reinstall. The conclusion from the street, about upgrading to Edgy, is a warning: If you're going to try to take the plunge, be sure to make a backup image of your boot partition before starting the upgrade. Your chances of having the upgrade be a total failure are high. If you're really dead-set on upgrading, you'll save yourself a lot of time and headache by backing up all of your personal files manually and doing a fresh install (don't forget to save your bookmarks!)."

Theovon writes | more than 7 years ago

Theovon writes "Back in the 1920's a blight all but completely wiped out Chestnut trees in the United States. As such, my uncle's 1100-tree chestnut farm is a rare sight indeed. From the article, "... someone found a single tree in Ohio that the blight did not kill ..., crossed it with the Chinese chestnut, resulting in a nut with the characteristics of the Chinese variety but with the larger nut of the American tree." The article goes on to describe some interesting things about chestnuts themselves, such as the spiny burr that they grow in on the tree."

Journals

Theovon has no journal entries.

