
Comments


Snowden Queries Putin On Live TV Regarding Russian Internet Surveillance

williamhb Re:His question was important and legitimate (391 comments)

It is a perfectly valid question that needs to be asked of all world leaders. While Putin's answer can certainly be seen as pure political spin, the question itself was legitimate and forceful, and by asking it Snowden forced Putin to give an answer against which he can later be measured.

Opinions of Snowden aside (I don't think he should be expected to dedicate his every action for the rest of his life to a privacy campaign just because he once blew a whistle), I think we can presume that the question was only asked because Putin and his aides wanted it to be. They had a prepared answer, information about Russia's eavesdropping is not in the public domain the way the West's is, and it let them thumb a nose at the US at a time when each is trying to portray the other as the bad guy over Ukraine.

2 days ago

Snowden Queries Putin On Live TV Regarding Russian Internet Surveillance

williamhb Re:Useful Idiot (391 comments)

These propaganda sessions for Putin are pre-staged, so Snowden has allowed himself to be used as a "propaganda tool". Considering how freedoms are curtailed in Russia, it seriously diminishes Snowden's reputation.

Snowden doesn't trade on his reputation -- his whistleblowing was a release of the government's own documents, and did not rely on his reputation at all (indeed, the public hadn't even heard of him before he released the documents). He's not a career campaigner, just someone who had been working in the business of eavesdropping on all of us and decided that it had gone too far. That he's now effectively in exile is a cost he clearly decided was worth paying, but that in itself doesn't mean that his every action for the rest of his life has to be about a freedom and privacy campaign.

Now in exile, he needs to find something to do with the rest of his life. Taking on the media celebrity role that has landed on his shoulders (essentially being a TV presenter) is a way of doing that. It doesn't mean he has to metamorphose into a hard-bitten, incisive journalist. Just let him get on with the rest of his life -- he's sacrificed enough of it already to open up the privacy vs. security debate to the public in Western democracies. There's no need to demand the rest of it from him too.

2 days ago

How Does Heartbleed Alter the 'Open Source Is Safer' Discussion?

williamhb Re:Wat? (580 comments)

Huh? The quote is "given enough eyeballs, all bugs are shallow." That's a clear admission that open software, like all other software, contains bugs; that's why you want the many eyeballs. Any claim otherwise is a symptom of not understanding plain English. Eric's whole point was that the bugs in open software will be found and fixed faster than the bugs in other software, due to the population of interested people who will study it, looking for the bugs.

To coin a corollary: because "given enough eyeballs, all bugs are shallow" (Raymond), in practice "given enough eyeballs, each will assume the others have inspected the code, and even shallow bugs can remain."

This bug wasn't some deep, complex intricacy. It was an incredibly simple and straightforward blunder that went unnoticed for years. If it had broken requests that should succeed, those many eyeballs would have felt some pain and been prompted into finding the bug. But because it instead let you do something you shouldn't be able to do (a security hole), people using the library normally did not feel the pain of things breaking. And so, it seems, they weren't motivated to review the code until much later.
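To make concrete just how shallow the blunder was, here's a toy Python sketch of its shape (the real bug was a missing bounds check in OpenSSL's C heartbeat handling; all names and data here are invented for illustration):

```python
# Hedged illustration of the *shape* of the Heartbleed blunder.
# The server echoed back as many bytes as the peer *claimed* to have
# sent, without checking the claim against what was actually received.

MEMORY_AFTER_BUFFER = b"...private key material..."  # adjacent server memory

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # Bug: trusts claimed_len, so an over-large claim reads past the
    # payload into whatever happens to sit next to it in memory.
    buf = payload + MEMORY_AFTER_BUFFER
    return buf[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # Fix: a one-line bounds check on the claimed length.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload")
    return payload[:claimed_len]

print(heartbeat_vulnerable(b"ping", 30))  # leaks the "private key material"
print(heartbeat_fixed(b"ping", 4))        # echoes only what was sent
```

The actual fix was similarly small -- which is rather the point: spotting it needed no deep expertise, only eyeballs actually looking.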

The big problem, as consumers of libraries, is that reviews cost. If every update of an open source library means its users then need to review the code change (in case it has brought in a Heartbleed-like bug that the committers did not check for), then free software becomes more expensive than it used to be. As programmers, each of us uses a lot of free software in most of our projects, and I suspect on average we read less than 5% of the code that is brought in (dependencies can have transitive dependencies -- have you reviewed the entire code of the web framework you use, plus all its supporting libraries, and all those libraries' supporting libraries?). We are all deeply dependent on some of the other eyes having looked, because we simply can't afford the time to look at it all ourselves.

It's not a distinction between open versus closed per se. But as open source has become ubiquitous, so too has a disclaimer of liability. Almost every library we (collectively) use is licensed "AS IS, WITHOUT WARRANTY OF ANY KIND, INCLUDING... FITNESS FOR A PARTICULAR PURPOSE". In other words, we live in an era where pretty much every software product, or at least one of its dependencies, carries an all-caps notice that it might be rubbish, unfit for purpose, burn your house down, or set fire to your cat. This runs completely counter to the idea of building in safety -- instead, we just disclaim away liability for unsafety.

Here we could perhaps divert into alarmist analogies: fire extinguishers labelled "WE DO NOT GUARANTEE THIS IS IN ANY WAY SUITABLE FOR PUTTING OUT FIRES" and potentially containing flammable foam; building industries with no planning, certification, or inspection requirements, where the only qualification asked of the structural engineer of a skyscraper is that an untrained interviewer thought he answered that brainteaser about how many marbles you could fit in a space helmet very eloquently. But that would be provocative for its own sake.

More realistically, we're in a world where a very large amount of software development assumes that it is non-critical and that breakage is minor ... even as it runs more and more of the economy. And pretty much everyone does it because we're all so economically optimised that it's the only way we can afford to get anything done.

4 days ago

Google To Replace GTK+ With Its Own Aura In Chrome

williamhb Re:Someone please explain (240 comments)

Why it really matters whether Google uses Qt or GTK or their own stack. I mean, for a GDE or distro like Ubuntu, I can see that "make another one" matters because it impacts all sorts of other projects. For Chrome, though, it doesn't really affect anyone else that I can see, and it's really just GNOME folks being upset that Google didn't want to use their stack. At the end of the day, isn't it just more work for Google?

I guess it depends whether their interest in it is limited to "we need something to write Chrome using, and GTK isn't doing it for us any more" or whether they will later be saying "come write apps for Chrome and ChromeOS using NaCl and Aura". Google has taken on their own UI stack -- is their only interest in it really to write just one application? If it is instead another step in the direction of encouraging developers to write apps that only work in Google's browser, that would be interesting to hear.

But I haven't looked into it closely.

about a month ago

EA's Dungeon Keeper Ratings Below a 5 Go To Email Black Hole

williamhb Re:Works for Slashdot as well... (367 comments)

While I'd long suspected that Slashdot comments were becoming the community for the irretrievably disgruntled to vote up each other's misanthropy, it's a bit anticlimactic that the community's last and most vehement rant is not about privacy, nor those old favourites "the evils of proprietary software", "the terrible patent system", or "if Microsoft did something today, it must be wrong", but the much more disappointing "how dare the website owners redesign their UI".

And so they marched on Washington, pickets waving in the air and cries of anger filling the wind, not at the government's policies, nor at its governance of the economy, nor at its honesty or care for the most vulnerable, but at the inferior design of its latest brochure.

about 2 months ago

Is Computer Science Education Racist and Sexist?

williamhb Re:Culturally Relevant == Irrelevant to CS (612 comments)

This is totally bullshit and it's being done for bullshit political reasons. Nothing good comes from the politicization of science and yet the politicians cannot resist making a political issue of the lack of "diversity" in CS education. In my own CS experience nobody gave a shit about whether you were black, white, asian or latino and yes we had all of those races represented in the program. What mattered was whether or not you could hack it and continue advancing through the curriculum. The grades were always on a curve and the competition was intense. If you weren't smart enough or fast enough you washed out. In CS, as in other sciences, people respect knowledge, ability and intelligence, not the color of your skin or your cultural background. If you wanted to major in foo-fa the Humanities department was on the other side of campus.

The class you've described doesn't sound particularly healthy -- a culture of competition rather than cooperation ("the competition was intense. If you weren't smart enough or fast enough you washed out") and grading based not on whether you're objectively able but on whether you're better than each other ("The grades were always on a curve"). While those might be good for motivating a subset of somewhat ego-driven, highly competitive students -- such as perhaps yourself, and also me when I was a student -- they're actually counter to what we're trying to teach.

Computing is inherently collaborative, so heavily prioritising competition over cooperation when we teach it is probably quite damaging, and there is no good reason (that I've seen) for a competent course to grade on a curve. As I see it, your grade should not be higher just because you were in a poor cohort with uncompetitive fellows (the curve pushing you up), nor lower because you were in a cohort of very able students (the curve pushing you down) -- your grade purports to be a straightforward, objective assessment of your understanding and performance in the subject, so that is probably what it should be.

If, as you've suggested, your whole CS program was a grade-curved culture of relentless competition, then educationally and culturally that's probably not a good thing -- even though you and I might have done very well out of courses like that.
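To make the curve point concrete, here's a toy Python sketch (thresholds, quartile cut-offs, and cohorts entirely invented) of how the same raw score fares under criterion-referenced versus curve grading:

```python
# Toy comparison: absolute (criterion-referenced) grading vs. curve
# grading by rank within a cohort. All numbers are invented.

def absolute_grade(score):
    # Grade against fixed criteria, regardless of who else is in the class.
    for cutoff, letter in [(85, "A"), (70, "B"), (55, "C")]:
        if score >= cutoff:
            return letter
    return "F"

def curved_grade(score, cohort):
    # Grade by rank: top quarter gets A, next quarter B, and so on.
    rank = sorted(cohort, reverse=True).index(score)
    quartile = rank / len(cohort)
    if quartile < 0.25:
        return "A"
    if quartile < 0.50:
        return "B"
    if quartile < 0.75:
        return "C"
    return "F"

weak_cohort = [72, 60, 55, 48]
strong_cohort = [95, 90, 88, 72]

print(absolute_grade(72))               # the same "B" in any cohort
print(curved_grade(72, weak_cohort))    # pushed up by weak classmates
print(curved_grade(72, strong_cohort))  # pushed down by strong classmates
```

The identical performance earns three different grades, which is exactly the objection: the curved grade measures your cohort as much as you.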

about 4 months ago

Paper: Evolution Favors Cooperation Over Selfishness

williamhb Re:Classic bad science reporting (245 comments)

Here's nearly every newspaper article about science ever: "Until recently, scientists believed in $obviously_false_idea, but a recent study shows that..."

The idea that cooperation has been selected for by evolution to some extent is obviously correct, because otherwise we wouldn't have social species that can't survive without cooperation. It's also nothing new; it's one of the central themes of The Selfish Gene, which everyone who feigns an interest in science pretends to have read.

I haven't read TFA, but I imagine the study was probably about some detail of how cooperation is selected for.

I have read TFA, and the paper isn't much like the news article at all -- or like the second part of the paper's title. The result is not a general result of "cooperation beating selfishness" -- indeed, the strategies they tested were out-survived by selfish strategies as well as by cooperative ones. It's a mathematical paper, with a set of simulations, showing that a peculiar set of recently discovered stochastic strategies (ZD strategies), which have a curious mathematical property, aren't evolutionarily stable after all. In the paper, ZD strategies are shown to be out-survived in a game of prisoner's dilemma both by selfish strategies and by (theoretically weaker) cooperative strategies.

But "this curious class of stochastic strategies you'd never heard of anyway turns out not to be stable in large-scale multi-agent computer simulations of the prisoner's dilemma" makes for a rather less gripping headline.

The headline (and the end of the paper's title) are spun from the fact that these quirky ZD strategies fail because they fare especially poorly in match-ups against themselves in a population. But that is not a general claim that cooperation beats selfishness.
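For anyone who wants to poke at the game itself, here's a minimal Python sketch of an iterated prisoner's dilemma tournament. Note this uses only simple deterministic strategies, not the stochastic ZD strategies the paper analyses -- it's just the substrate such simulations are built on:

```python
# Minimal iterated prisoner's dilemma round-robin with the standard
# payoff matrix (mutual cooperation 3, mutual defection 1,
# sucker 0, temptation 5).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def all_c(my_hist, their_hist):
    return "C"                       # always cooperate

def all_d(my_hist, their_hist):
    return "D"                       # always defect

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"   # echo opponent's last move

def play(s1, s2, rounds=100):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

strategies = {"ALLC": all_c, "ALLD": all_d, "TFT": tit_for_tat}
totals = {name: 0 for name in strategies}
for n1, s1 in strategies.items():
    for n2, s2 in strategies.items():
        if n1 < n2:                  # each distinct pairing once
            a, b = play(s1, s2)
            totals[n1] += a
            totals[n2] += b
print(totals)
```

A ZD strategy would slot in as a fourth function choosing C or D with fixed probabilities conditioned on the previous round's pair of moves; the paper's point is about how such strategies fare once the population itself evolves.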

about 9 months ago

Microsoft Will Have To Rename SkyDrive

williamhb Re:Good to see (274 comments)

Yes, obviously, one company should own a trademark on any product containing the word "Sky" in it.

Trademarks are limited to particular goods and services, but for the Sky trademark this includes "computer aided transmission of messages and images", "home computing services", "computer programs", etc. As BSkyB are also a broadband ISP in the UK, it seems reasonable that they've registered the mark to include those goods and services.

http://www.ipo.gov.uk/tmcase/Results/4/EU000126425

about 9 months ago

MOOC Mania

williamhb Re:The best will rise to the top (102 comments)

I've been "tasting" the various online courses for the last 15 months or so: started with Dr. Thrun's online AI course, have contacts with people at edX, have taken or viewed courses from a half-dozen entities.

One salient aspect of all of the MOOCs is their overall poor quality.

While it's true most early MOOC courses are a bit limited in interaction, pedagogy, assessment, etc, they have already had an interesting effect on universities.

I've been working on smart teaching tech on and off for nearly 10 years, including on an older project by some of the people behind edX at MIT. More recently I've been looking to bring online tools into the lecture theatre. For most of the time I've worked on teaching technology, I've often heard the reaction that it's all well and good, but no-one really cares about teaching because academics are promoted on their research, and teaching is just something we do to bring in the cash. (There have always been some academics and centres very interested in teaching innovation, but it's seemed to me that they've not had as much attention from the rest of academia as they should have.)

In the last year or so that seems to have changed. A lot more attention has been drawn to the idea that now is the time to make some changes to how teaching is viewed and done in universities.

about a year and a half ago

The Problems With Online Math Classes

williamhb Re:I had the exact opposite experience (285 comments)

Actually different teachers around the world could put up their videos on the same topics.

And the students can go figure out which teachers they understand better.

Then teachers can spend more time on trying to teach the students who still have problems understanding stuff. Or figuring out if the students really understand stuff or even have mastered the topic.

Might take another 20-50 years before that'll happen.

I've been working on the tech for that for a while. (Plus in-class interactivity to incentivise using it.)

And I've just reached the point where I'm looking for some other teachers to help try it out (forgiving early adopters to begin with, of course). Get in touch if you're interested!

about a year and a half ago

Windows 7 Overtakes XP, OSX Struggles To Beat Vista

williamhb Re:OS X is THE superior OS (540 comments)

Native software will always have significant advantages over web apps. That being the case there's no reason to assume we'll ever do everything via the browser.

Universal thin clients are as old and unfulfilled a prediction as "the year of Linux on the desktop". And you think Microsoft's vision of the future adds any weight? Ha ha.

The browser itself is a barrier to webapps. In an in-browser app, as soon as you need to include third-party content (which might be as simple as writing a Twitter client and wanting to show the content of a tweeted URL), you have to deal with pages the browser refuses to load because of X-Frame-Options, REST API calls to, say, Dropbox where the browser refuses to return the data for the POST request to fetch the deltas because Dropbox hasn't set the CORS headers, and so on. These are security measures that are there to protect your app from others too, but it won't take you long to realise that if you instead have a native app with a WebKit pane for the third-party content, a lot of the headaches the browser introduces go away.
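Part of what makes this so frustrating is that CORS is enforced by the browser, not the server -- the server merely sends headers, and the browser decides whether your page's script may read the response. Here's a simplified Python model of the browser's check for a simple cross-origin request (not a real browser implementation, just the gist):

```python
# Simplified model of the browser-side CORS decision for a "simple"
# cross-origin request. Native apps never apply this check to their
# own HTTP calls -- only browsers do, on behalf of the loaded page.

def browser_allows_cross_origin_read(response_headers, page_origin):
    # The browser lets the page's script read the response only if the
    # server opted in via Access-Control-Allow-Origin.
    allow = response_headers.get("Access-Control-Allow-Origin")
    return allow == "*" or allow == page_origin

# A response with no CORS header (like the Dropbox case described above):
no_cors = {"Content-Type": "application/json"}
print(browser_allows_cross_origin_read(no_cors, "https://myapp.example"))

# A server that opted in for everyone:
open_cors = {"Access-Control-Allow-Origin": "*"}
print(browser_allows_cross_origin_read(open_cors, "https://myapp.example"))
```

The data arrives over the wire either way; the browser simply withholds it from your script in the first case, which is why moving the same request into a native app makes the problem vanish.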

(And of course, as browsers auto-update a dozen or more times per year, and there are several to support, a webapp is aiming at several uncontrolled moving targets.)

about a year and a half ago

Bill "The Science Guy" Nye Says Creationism Is Not Appropriate For Children

williamhb Re:So which field of engineering (1774 comments)

No, actually, creationists do *not* believe in a rational ordered universe.

I am a physicist. I don't know what all the laws of physics are, but I believe that there *are* some inviolate laws of physics which apply uniformly throughout all that is. So far as we can tell, this is true: spectral lines in distant stars are the same as they are here, to very high precision, indicating that atomic and nuclear physics are the same. Electrodynamics and such work the same way inside stars as they do in all conditions we've found on Earth.

I suppose you could be a creationist and believe in a deistic universe, where a god chose the laws of physics and then wound up his universe and let it go. But modern creationists do not believe this: they are overwhelmingly Christian, and believe in such things as a god that actively intervenes on this little planet by making virgins pregnant and turning people into pillars of salt -- in general, they believe in miracles, even small ones like altering the genetic makeup of a species. This is the very opposite of a rational ordered universe: all these things, all these miracles, are inherently disordered, since they entail violations of the laws of physics by an entity outside of them. "F=ma, except when god says otherwise" is not a sound basis for a rational theory of the universe.

They do not believe in a completely ordered and repeatable existence. That is, they do not make the (actually inherently risky) assumption that just because we've seen lots of things behave as if they are ordered and repeatable, all things must always be so.

Consider a man in a room who asks his colleague (outside) to bring in any cats he finds that are black, so he can count them. If a cat isn't black, it isn't brought in -- maybe it'll be black later, or maybe it's black in some way that can't be seen yet, so judgment is reserved and it isn't counted as evidence either way. Naturally, the man in the room sees only an endless stream of black cats, and might (wrongly) be tempted to think that is ever-increasing evidence that all cats are black. Unfortunately, that is the position of a man who believes that because we've found more and more orderliness and repeatability in the universe, existence must all be orderly and repeatable.

Those who believe in any form of divine action (including but not limited to creationists) are actually rather more rational in their conclusion in this regard: "Wow, there's a lot of black cats, and we can probably find lots more, but I'm not going to insist they all must be". Separate sources of evidence then give them reason to believe that God is an example of something that is real but not mechanistically repeatable.

Interestingly, a lot of the early impetus for science -- the notion that the universe would be orderly at all -- came from the religiously-derived belief that it would be ordered because it would obey laws laid down by God for it. That derivation, of course, does not exclude divine action nor does it have your slightly obsessive "all or nothing -- either it's completely ordered always or otherwise it's a totally irrational model" mantra.

As a funny anecdote/aside, "F=ma except when god says otherwise" is not only a rational model but is precisely what is modelled in almost every simulation -- almost every simulation I have seen or written has allowed its author to pause it, change a variable, and then set it going again.
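For instance, here's a toy Python integrator along those lines -- the loop applies F = ma faithfully, while the author remains free to reach in mid-run and change the state:

```python
# Toy Newtonian integrator: constant force F on mass m, explicit Euler
# steps. The "laws" live entirely in step(); the author can intervene
# between steps from outside them.

def step(state, dt=0.1):
    a = state["F"] / state["m"]     # F = m*a, so a = F/m
    state["v"] += a * dt
    state["x"] += state["v"] * dt

state = {"x": 0.0, "v": 0.0, "F": 10.0, "m": 2.0}

for _ in range(10):
    step(state)                     # the universe obeying F = ma

state["v"] = 0.0                    # "god says otherwise": an intervention
                                    # the laws themselves don't describe
for _ in range(10):
    step(state)                     # ...and the same laws simply resume

print(state["x"], state["v"])
```

Nothing in `step()` models the intervention; it is imposed from outside the simulated physics, which is exactly the structure being described.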

about a year and a half ago

Ask Slashdot: Is the Rise of Skeuomorphic User Interfaces a Problem?

williamhb Re:Shit Editors (311 comments)

The arguments against skeuomorphic design are that skeuomorphic interface elements use metaphors that are more difficult to operate and take up more screen space than standard interface elements; that this breaks operating system interface design standards

Personally I'd argue that skeuomorphic designs are almost certainly worse for usability, but that might be outweighed in marketing by their attractiveness / emotional connections with the product.

In UI design, it seems to me that one of the things you're trying to do is communicate relationships between the various controls, the things they manipulate, etc. And you have a two-dimensional non-tangible interface with which to communicate those relationships. (Even with touch, you're not actually "pressing a button" you're tapping on a coloured region of glass.) The trade-offs that optimise communication are almost certainly different than if you have a tangible three dimensional interface (eg, a physical tape recorder, instead of an audio memo app). In a skeuomorphic app, you do not have the physical haptic pliability of the button to your thumb, just a slightly wobbling brown graphic. In a skeuomorphic app, you do not naturally see the item in three dimensions as you pick it up and its orientation to your eye changes on the journey to a comfortable manipulation distance. You just have a flat graphic of a pretend item from a preset angle. The affordances are different, so the optimum design to help the user achieve their goals is probably different.

The example I'd use is Windows -- over a decade or two it has steadily moved away from being skeuomorphic (e.g., panels looking like they sit in little bevels, buttons looking like square raised things) to something much cleaner. Those bevels introduced lines that distracted ("why is my eye drawn to a bevel that does nothing, again?") and made an element feel divided from the surrounding controls, when the design probably wanted to communicate that they were related to it, not separated from it.

The exception however is marketing and the attempt to get a purchaser to emotionally engage with an item (rather than find it easy to use). A picture of a beautiful old tape player is probably more appealing at first glance in the Apple Store than a white background with clearly distinct controls. Likewise a slightly harder to use item might feel as if it can do more even if it can't.

about a year and a half ago

How Long Do You Want To Live?

williamhb Re:Keep Paying for Your Spot in Heaven (813 comments)

There was a cool thing that happened to me when I figured out that the Law of Parsimony indicates that life is the end.

You do realise that the "Law of Parsimony" says precisely nothing about reality itself? That there is no actual law of nature that demands the universe be uncomplicated, or entirely predictable, or repeatable? That the "Law of Parsimony" is a convention to aid the progress of academic models? (It's easier to deal with small, incremental additions and alterations to models in experimentation and review than with large changes, and if an aspect of the universe isn't predictable or reliably repeatable, then science and experimentation are buggered so far as it's concerned.) If you believe, however, that just because we want our academic models to be parsimonious, the universe itself must also be, then you are, frankly, a bit silly. Science does not (so far as we know) alter the nature of reality, and reality is at liberty to be as complex, unrepeatable, or mystical as it happens (or happens not) to be, and there ain't a darn thing we can do about it.

Likewise, if you believe your personal model should be parsimoniously limited to the academic model of the day, you are equally silly -- the rules governing types of evidence, parsimony, etc, are not there to optimise your model, but to optimise academic processes and potential future academic models some hundreds of years hence (science never ends) and make them more manageable. If God turns up and punches you on the nose, it will still not be science but you would be well advised to pay attention.

about a year and a half ago

University Receives $5 Million Grant To Study Immortality

williamhb Re:The Answer for $5M (532 comments)

The post I replied to was asking about consciousness, intelligence, and the brain. I ignored the unanswerable one, and addressed the other two. ...

gweihir's post you replied to was not "asking" and was itself a reply to the statement "That doesn't mean we fully understand it, or that we ever will, but consciousness very obviously arises solely out of the brain.". It appears you decided to silently ignore the topic of conversation! And if you read it again in the context of the thread, you should see your post appears to (apparently accidentally) read as if it is claiming there is experimental data for precisely what you now say is unknowable.

about a year and a half ago

University Receives $5 Million Grant To Study Immortality

williamhb Re:The Answer for $5M (532 comments)

Define consciousness.

That's actually precisely my point. A claim "there is experimental data (about X)" is meaningless if (i) the experiments were about something else, and (ii) X cannot be sufficiently well defined to conduct any experiments on it. As I've had to point out elsewhere on this topic, "consciousness" is necessarily defined first person ("I am" not "you are" nor "my neurons happen to fire in a particular way") which is precisely why we can never have experimental data on it -- because it is by definition only accessible first person and our experiments are by definition only reportable third person. Foolish attempts to duck the issue by "let's just call something else 'consciousness' and pretend we've answered the same question" are just that.

about a year and a half ago

University Receives $5 Million Grant To Study Immortality

williamhb Re:The Answer for $5M (532 comments)

That's not an answer. And I think you know it as well as I do.

First, I'm quite familiar with computability and Godel's incompleteness theorems, and they have nothing at all to do with the question. There is absolutely nothing in them that implies AI is a fundamentally unsolvable problem. In fact, most AI researchers probably understand those subjects far better than you do - and they still consider AI to be a worthwhile problem to study.

Second, the current state of the art in AI is irrelevant. You didn't just say there were things computers can't currently do. You said there are things no computer can ever do, no matter how powerful. That claim needs to be justified.

Third, AI has actually been making dramatic progress in recent years. After decades of following paths that didn't lead anywhere, researchers have finally found some techniques that enabled major breakthroughs. If you want to see practical applications of that, just look at Watson or Siri or the like. A mere five years ago, both of those would have been science fiction. Today they're real.

Ironically, the recent advances in "AI" have come largely from ceasing to think of AI as building an artificial version of human cognition, and instead doing something with far fewer philosophical claims: statistically mining the heck out of big data.

about a year and a half ago

University Receives $5 Million Grant To Study Immortality

williamhb Re:The Answer for $5M (532 comments)

No, there is experimental data as well. Lobotomies are a result of "change the brain, change the intelligence/personality" experiments. There is direct experimental data, not just external observation.

The word of the day, "consciousness," appears markedly absent from your description of the experimental data -- you are making a false connection between consciousness and personality. Or to put it more humorously: plenty of observers around me can tell you my intelligence and personality are different before and after my first cup of coffee in the morning, but I can assure you I'm still the same consciousness...

about a year and a half ago

University Receives $5 Million Grant To Study Immortality

williamhb Re:The Answer for $5M (532 comments)

No, "emergent property" is a term co-opted from complex systems research: if you have enough agents with fairly simple rules (say, termites laying down pheromones), you can get some astonishingly complex macroscopic behaviours (say, termite nests).

That's what information processing is.

No, it's not. It is something that may occur in multi-agent systems, but it is not "what information processing is".

(Snipped the remainder of your post as it's essentially irrelevant posturing.)

about a year and a half ago

University Receives $5 Million Grant To Study Immortality

williamhb Re:The Answer for $5M (532 comments)

"Emergent property" in non-hand-wavy language is known as "information processing". It's what computers do.

No, "emergent property" is a term co-opted from complex systems research: if you have enough agents with fairly simple rules (say, termites laying down pheromones), you can get some astonishingly complex macroscopic behaviours (say, termite nests). It gets co-opted and used in some hand-waving to try to explain away consciousness as an "emergent property" (and abstraction) of there being lots of neurons in the brain.
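A classic concrete instance of the idea (using Conway's Game of Life rather than termites, since it fits in a few lines of Python): each cell follows a trivial local rule, yet a "glider" pattern emerges that walks diagonally across the grid:

```python
# Conway's Game of Life: each cell looks only at its 8 neighbours
# (born with exactly 3 live neighbours, survives with 2 or 3), yet
# complex macroscopic behaviour emerges from these simple local rules.

from collections import Counter

def life_step(alive):
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# The "glider": five cells whose pattern travels diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# After 4 steps the whole pattern has shifted one cell down-right:
print(state == {(x + 1, y + 1) for (x, y) in glider})
```

Nothing in the rule mentions "gliders" or movement at all; the travelling pattern is purely a macroscopic consequence of the local rule, which is what complex-systems people mean by emergence.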

Unfortunately, it also entirely misses the point -- consciousness is not an abstraction, it is atomic (as you sit staring out of your eyes -- which by definition is the only way you have of knowing that you have a consciousness -- you experience only the consciousness and not the neurons).

Not only do we understand that as a result of rigorous scientific research, it's such an important piece of knowledge that a person who "disagrees" with it should be considered unqualified for any kind of discussion related to science or brains, in the same way as a person who believes that Earth is flat is unqualified for any kind of discussion related to geography or astrophysics.

Your rhetorical bluster merely suggests you don't understand how science works. There is no such thing as "unqualified for any kind of discussion" -- when you submit a paper for review, you will never be asked your qualifications or whether you agree with a prescribed set of opinions. Certainly, I have yet to contact the author of any paper I have reviewed to say, "Before I read your paper, I just want to check you're fundamentally opposed to dualism..."

Moreover the way we choose to define science (necessarily third party observational) makes this question inaccessible as it is necessarily first party observational. The philosophical question has always been "I am" not "my neurons fire in a particular way", nor even "you are" or "he is".

Nobody knows how strong/true AI could be built

I do! There are over 7 billion examples of it currently in use!

You are aware of what the "A" stands for in AI, aren't you?

On top of that, there is plenty of math, however the same can be said about any computer.

Nope, we've found out since the '60s that brains work remarkably differently from computers, and AI is no longer slavishly attempting to replicate theories of human cognition in algorithmic form.

about a year and a half ago

Submissions


Rumours LHC has identified two strangelets

williamhb williamhb writes  |  about 4 years ago

williamhb (758070) writes "As you may be aware, the CERN Large Hadron Collider performed its first high energy collisions just the other day. The official statements are that it will take many months or even years to collect and analyse the data. However, early data already suggests some interesting findings.

Not one but two kinds of strangelet appear to have been created. The first turns other particles into strangelets as it contacts them. This might have caused a world-ending reaction had it not been for the second kind. Once a sufficient density of the first strangelets exists in an area, a kind of anti-strangelet appears. This exerts a strong attraction force on other particles, and causes the first strangelets to revert back to normal matter. The strong attraction force suggests this might be an unexpectedly different form of the long sought-after Higgs particle, though this is yet to be confirmed.

The scientists have informally dubbed the first strangelets as "wise particles" or "cluons" as they mimic the way knowledge is passed from person to person. The opposite strangelets' flocking behaviour and cancellation effect on wise particles has led to them being dubbed "fools".

So, today April 1st, scientists can claim to have identified Higg's Bozo."


Journals


Message topic

williamhb williamhb writes  |  more than 5 years ago Just a blank journal post so people without my email address can contact me away from the general hubbub. (Though NB: comments can be read by others, I believe!)
