
Semantic Web Getting Real

kdawson posted more than 6 years ago | from the open-it-up-and-give-it-away dept.

The Internet

BlueSalamander writes "Tim O'Reilly just did an interview with Devin Wenig, the CEO-designate of Reuters. With no great enthusiasm I started to read yet another interview on how the semantic web was going to make everything great for everybody. Wenig made some good points about the end of the latency wars in news and the beginning of the battle for automatically detecting linkages and connections in the news. Smart news, not just fast news. Great stuff — but just more words? Nope — a little searching revealed that Reuters just opened access to their corporate semantic technology crown jewels. For free. For anyone. Their Calais API lets you turn unstructured text into a formal RDF graph in about one second. I ran about 5,000 documents through it and played with a subset of them in RDF-Gravity. The results were impressive overall. Is this the start of the semantic web getting real? When big names and big money start to act, not just talk, it may be time to pay attention. Semantic applications anyone? The foundation appears to be here."
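
For the curious, calling a Calais-style service boils down to POSTing raw text and getting RDF back. A minimal sketch in Python; the endpoint URL, header names, and key handling here are illustrative assumptions, not the actual Calais interface (check the OpenCalais docs for that):

import requests  # third-party HTTP library

# Hypothetical endpoint and auth header -- placeholders, not the real API.
CALAIS_URL = "https://api.example.com/enrich"
API_KEY = "your-key-here"

def text_to_rdf(text):
    """POST unstructured text, get an RDF document describing it back."""
    resp = requests.post(
        CALAIS_URL,
        headers={"x-api-key": API_KEY, "Content-Type": "text/plain"},
        data=text.encode("utf-8"),
    )
    resp.raise_for_status()
    return resp.text  # RDF naming the entities, facts and events found

print(text_to_rdf("Reuters has opened its Calais API to the public.")[:400])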

135 comments

Semantic Spam (2, Insightful)

Rog7 (182880) | more than 6 years ago | (#22374878)

Next up, semantic spam.

Actually, I think it's beaten the rest of the content to the punch. =(

Re:Semantic Spam (4, Funny)

Reverend528 (585549) | more than 6 years ago | (#22374892)

Well, as long as the spammers stick to the spec and use the <RDF:spam> type for their content, then it should be pretty easy to filter.

Re:Semantic Spam (4, Insightful)

fonik (776566) | more than 6 years ago | (#22375298)

And this seems to be a major problem of the whole semantic web buzz. Search engines like Google can cut down on abuse because they're a third party that is unrelated to the content. The whole semantic web thing offloads categorization to the content source, the very party that is most likely to try to abuse the system.

It just doesn't seem like the best idea in the world to me.

Re:Semantic Spam (5, Funny)

Necrobruiser (611198) | more than 6 years ago | (#22375598)

Of course you realize that this will just lead to a bunch of neo-netzis with their anti-semantic remarks....

Re:Semantic Spam (2)

UbuntuDupe (970646) | more than 6 years ago | (#22375902)

*fighting urge not to say it...*

I don't think they're being anti-semantic when they say things like that, they're just saying that newish tag systems should be segregated off to a special "neighborhood" of Web, maybe marked with a special star or something, so that they can easily be deleted if they turn out to cause trouble.

(Admit it, you smirked...)

So does that mean... (0)

Anonymous Coward | more than 6 years ago | (#22376076)

we're going to see a lot of semantic goosestepping and .sig <h1><i>l!</i></h1>s?

Re:Semantic Spam (2, Informative)

msuarezalvarez (667058) | more than 6 years ago | (#22375612)

This is slashdot and all, I know. But you seem not to have read even the summary: this is about someone exposing an API which lets you turn text into an RDF graph independently of the text producer. If you want, this is something like someone giving you access to a tool like the one used by Google.

Re:Semantic Spam (1)

fonik (776566) | more than 6 years ago | (#22377186)

Yeah, I did read that, and I was speaking generally about the whole semantic web buzz and not specifically about the article. This is a case of a single third party categorizing a large amount of data. Since the documents are all categorized in the same way, the potential for abuse is low. But is that an improvement over current search algorithms?

Re:Semantic Spam (1)

recharged95 (782975) | more than 6 years ago | (#22376422)

Google is tied to all their content because that's how they make money (it IS related). Also, it's no different if a 3rd party chooses not to cut down on the abuse -- now you have 2 parties to convince not to abuse. "Do no evil" is subjective, remember?

Re:Semantic Spam (4, Informative)

SolitaryMan (538416) | more than 6 years ago | (#22377354)

And this seems to be a major problem of the whole semantic web buzz. Search engines like Google can cut down on abuse because they're a third party that is unrelated to the content. The whole semantic web thing offloads categorization to the content source, the very party that is most likely to try to abuse the system. It just doesn't seem like the best idea in the world to me.

I think you are missing the point of the Semantic Web: you can refer or link to an object, not just a document.

The company declares its URI. Now, if you are writing an article about this company, you can uniquely identify it, and every web crawler knows *exactly* which company you are talking about. If the URI for the company is a hyperlink to its web site, then it can't be abused: the company itself declares what it is. The unique URI will in fact be a link to some file with information about the company (maybe an RDF file -- doesn't really matter for the concept).

The system can (and will) be abused in the same way as the old web: irrelevant links, words, concepts -- nothing new for the crawler, and it can be defeated with existing techniques.

Again, Semantic Web = Links between concepts, not just documents, so please do not bury the good idea under the pile of misunderstanding.
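
A minimal sketch of the parent's point in Python with rdflib; the URIs and vocabulary are invented for illustration, the point being only that two different documents can reference the same company URI:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/vocab#")          # invented vocabulary
company = URIRef("http://acme.example.com/id#self")  # URI the company declares

g = Graph()
g.bind("ex", EX)

# The company describes itself at its own URI...
g.add((company, RDF.type, EX.Company))
g.add((company, RDFS.label, Literal("Acme Corp")))

# ...and any number of articles can point at exactly that resource, so a
# crawler knows they all mean the same company, not merely the same string.
for article in ("http://news.example.com/a1", "http://blog.example.com/b7"):
    g.add((URIRef(article), EX.mentions, company))

print(g.serialize(format="turtle"))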

Re:Semantic Spam (2, Interesting)

nwbvt (768631) | more than 6 years ago | (#22378122)

It does seem like we are in a cycle. Way back in the days when dinosaurs like Lycos and Hotbot ruled the search engine world, information on the net was categorized by tagging. Those of you over the age of 17 remember it: back then, if you did a search for "American Revolution", half your results would end up being porn sites that put meta tags containing the phrase "American Revolution" on their page (although I can say those were great days to be a teenager). Then Google came about with their new "PageRank" system, which was much harder (though still not impossible; look up Google-bombing or the Church of Scientology's use of Google for more details) to fool. Now all of a sudden we hear talk of going back into a world of tags that are being advertised as more "democratic", and this more sophisticated (but similarly flawed) scheme known as the "semantic web". Who wants to bet this new system won't last more than a year or two at most?

Re:Semantic Spam (1)

ultranova (717540) | more than 6 years ago | (#22378128)

In Soviet Russia, the system abuses you!

Re:Semantic Spam (1)

semanticsearch (1157807) | more than 6 years ago | (#22378336)

The idea is that there is also an identity and trust infrastructure. Take this and mix with OpenID and you can marginalize spam (as we know it). I know Slashdotters are fond of some kinds of spam.

Re:Semantic Spam (2, Interesting)

soxos (614545) | more than 6 years ago | (#22379032)

The whole semantic web thing offloads categorization to the content source, the very party that is most likely to try to abuse the system.

That's the same criticism given to Wikipedia or unmoderated Slashdot. Consider the semantic web for discovery combined with moderation and you can see that there could be something to this.

Re:Semantic Spam (1)

smitty_one_each (243267) | more than 6 years ago | (#22374942)

Much of the output of the various news sources today is, arguably, spam.
So the question I would have liked to pose is:
Since we can't filter out bias, how can the technology help to make the news biases more transparent and quantifiable?
For example, work like this about VP Cheney [newsbusters.org] deserves to be bagged, tagged, and ignored, for it is a blemish on the face of legitimate journalism.

Symantec Web? AHHHHHHH!!! (1)

Akaihiryuu (786040) | more than 6 years ago | (#22375130)

Am I the only one who misread that?

Re:Symantec Web? AHHHHHHH!!! (4, Funny)

bane2571 (1024309) | more than 6 years ago | (#22375200)

I read it like this:
Semantic web getting real [player]
and immediately thought "it was bad enough when the original web got it"

Re:Symantec Web? AHHHHHHH!!! (3, Funny)

gotzero (1177159) | more than 6 years ago | (#22375226)

"Please note this environment may not be completely safe, so we are going to prevent you from entering. We have also initiated so many system processes that it will simulate a virus on this system."

The links in that article are neat. I am looking forward to watching the maturity of this!

Actually... (0)

Anonymous Coward | more than 6 years ago | (#22375538)

Instead of that, I misread Calais as Cialis, which wasn't helped by the first post being about spam...

Re:Symantec Web? AHHHHHHH!!! (1)

webmaster404 (1148909) | more than 6 years ago | (#22375562)

Nope, I did too, and I was wondering... does this mean that Norton won't crash and slow down Windows computers more than most spyware/viruses?

What? (0, Offtopic)

TubeSteak (669689) | more than 6 years ago | (#22374894)

Is the semantic web supposed to be one of those Web 3.0 things?

Re:What? (2, Interesting)

owlnation (858981) | more than 6 years ago | (#22375172)

Yes -- essentially.

And the only reason we moved from Web 1.0 to web 2.0, and the only reason we need to move from Web 2.0 to Web 3.0 is...

We are still stuck on Search 1.0

Well, ok, to be fair to Google -- Search 1.5

Sorry, but we won't see much improvement in utility until someone rolls out Search 2.0. That is a product LONG overdue.

Re:What? (2, Insightful)

STrinity (723872) | more than 6 years ago | (#22375328)

Is the semantic web supposed to be one of those Web 3.0 things?


If by that you mean "a collection of buzz-words that everyone uses without having Clue 1 what the hell they're talking about," yes.

Content? (4, Insightful)

Walzmyn (913748) | more than 6 years ago | (#22374934)

What good are fancy links if the content still sucks?

Command line vs GUI all over again (3, Interesting)

EmbeddedJanitor (597831) | more than 6 years ago | (#22375072)

This looks like the command line vs GUI wars all over again. GUIs are fine for rapidly hitting easy-to-find targets but sometimes typing is far easier and faster. Lumbering crap GUIs are really hard to drive (e.g. MS Visual Studio).

Semantic webs might be OK for small document sets where you can visually search tags and click them. Want to look up something about monkeys? Look for the tag that says monkeys (or maybe find primates first, then monkeys) and click it.

But for huge data sets this sucks. After a smallish number of documents & subjects it must be far easier to type "monkeys" in a search box and have Google etc. do the search.

This might work for handling some queries, but will suck supremely for complex queries over large data sets (eg. the whole www).

Re:Command line vs GUI all over again (3, Interesting)

smurgy (1126401) | more than 6 years ago | (#22375876)

I really think you're forgetting about the power of booleans over indexed content and the weakness of string searching. Positing that a tag-dense web search, in which autoindexers crunch tags for every page, would contain an overabundance of hits compared to string searching is arguable, but in fact what tag searching does is provide a far more meaningful range of hits. There might or might not be more, but it's better.

We need to couple the proposed "semantic web" with more than the single-box search page; or rather, allow users who can't cope with anything beyond a single box and/or learning to use operators to keep their good old Google search interface as a second option, and put the current advanced search on the front end.

Pie in the sky I know, but I like to think that the drive to search simplicity is reflective of the needs of the last generation (scared of information density) and not of the potential of the future ones (growing up searching).

I can handle a search pretty well, and I'd enjoy getting more of a chance to search for meaning, not just strings. Think of a search page with a theoretically infinite number of boxes: each box drops down to a specific type of search (tags, headers, content, etc.) and operator; in each box I can put an importance rating (so pages with matching tags are vital, pages with matching strings rank higher but aren't necessary, etc., depending on my needs); and under the bottom box is a "spawn new box" button. If I don't like my results I customise my search, search-in-results, change my elements.
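
That multi-box scoring idea is easy to sketch. A toy version in Python, where each document is a dict of fields and each "box" is a (field, term, weight, vital) tuple; all names here are invented for illustration:

# Toy weighted multi-field search: vital boxes must match or the document
# is dropped; optional boxes just add to the ranking score.
def score(doc, boxes):
    total = 0.0
    for field, term, weight, vital in boxes:
        hit = term.lower() in doc.get(field, "").lower()
        if vital and not hit:
            return None
        if hit:
            total += weight
    return total

docs = [
    {"tags": "space shuttle nasa", "content": "Launch schedule for STS-122"},
    {"tags": "pharma", "content": "space shuttle space shuttle shuttle"},
]
boxes = [
    ("tags", "shuttle", 5.0, True),       # matching tags are vital
    ("content", "schedule", 1.0, False),  # matching strings rank higher
]
scored = [(score(d, boxes), d) for d in docs]
for s, d in sorted((x for x in scored if x[0] is not None),
                   key=lambda x: x[0], reverse=True):
    print(s, d["tags"])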

Professionally I work with custom-indexed databases all the time and it's a pain in the behind to know the amount of information available on the net but be faced time and again with its limitations. Every criticism you make of semantic searching here applies ten times over to string searching. Should the tag creation software be able to match human tagging in accuracy, it would easily override it in coverage. As to accuracy, look at the tags assigned here. The article references the open release of Reuters' Calais, and someone's assigned the tag "vaporware". Given that vaporware is by definition unreleased (and never to be released) software, I'd say human tagging is running at 33% failure on this article at least.

Where's the Money? (2, Interesting)

Blakey Rat (99501) | more than 6 years ago | (#22374936)

I've never understood what the financial benefits for a site joining the semantic web are supposed to be. Reuters may be one thing, but how would you sell this technology to Amazon? Or NewEgg? If commercial sites can't/won't use it, how is it supposed to gain critical mass?

Re:Where's the Money? (5, Insightful)

QuantumG (50515) | more than 6 years ago | (#22375002)

Yeah, it won't matter until Google starts getting in on the act. When you can search for "a website where I can get free kittens and other pets" and get exactly that, instead of just sites that have those keywords in them (like this message in a day or so), then it will be valuable for people to RDF their site and maybe even look at the mess that the translator makes and clean it up.

long way to go.. (1)

emj (15659) | more than 6 years ago | (#22375126)

There is something similar at http://www.powerset.com/ [powerset.com] ; they are still in beta though, and it's not working that great. We will never get perfect matches from computers, but the question is whether semantics will ever be better than just keywords.

Re:long way to go.. (2, Insightful)

QuantumG (50515) | more than 6 years ago | (#22375238)

blah, search is great and all, but that shouldn't really be the ultimate purpose of the Semantic Web.

Asking a question and getting a sensible answer, that's the killer app.

Re:Where's the Money? (2, Insightful)

ushering05401 (1086795) | more than 6 years ago | (#22375032)

Feeding Proxies is one potentially lucrative use of semantic technology.

Here is a basic scenario for ten years down the line:

1. You build a profile, probably through a combination of allowing your online activities to be profiled, filling out in-depth surveys, and rating certain types of web content on a semi-regular basis.

2. A proxy identity is imbued with a 'personality' based on both your preferences as represented in step one, and ongoing analysis of content that causes you to register a strong reaction.

3. The proxy consumes content and delivers what it believes to be desirable content to your device of choice.

Given this business model we could see a return to the old 'portal' style of doing web business - though the portal itself would be largely invisible to the subscriber. Anything as simple as changing the diction of a news item could vastly alter the interest of the proxy public.

Re:Where's the Money? (3, Interesting)

pereric (528017) | more than 6 years ago | (#22375414)

If I have a business selling - for example - bicycle pedals, being well listed at www.bike-pedal-finder.com, or found by users of some yellow pages, could certainly help my business. If the search engines could use information like that below, it would probably help:

<dealer name="my company">
  <in-stock>
    <pedal model="M525" price="20E"/>
    <pedal model="M324" price="10E" status="pre-owned"/>
  </in-stock>
  <location> ... </location>
  <shipping> ... </shipping>
</dealer>

Re:Where's the Money? (1)

sime0n (1037194) | more than 6 years ago | (#22376468)

I've been wondering the same thing, actually, and found this post on ReadWriteWeb on Dapper's plans to use semantic data to drive an advertising network pretty interesting: http://www.readwriteweb.com/archives/dapper_funding_the_semantic_web.php [readwriteweb.com] For a company like Reuters, I could see them driving ads for country, market, or industry reports using the tags embedded in their stories, or letting other businesses further down the information analysis pipeline do the same.

Anti- Semantic comments in 3 ... 2 ... 1 ... (1, Funny)

Anonymous Coward | more than 6 years ago | (#22374946)

And now for a host of Anti-Semantic comments in 3 ... 2 ... 1 ...

Well, I am sure the authors will just call them Anti-Zio[a]ntic comments.

I beat off on my router... (0)

Anonymous Coward | more than 6 years ago | (#22375042)

... now I can surf the sementic web.

Yawn... (4, Interesting)

icebike (68054) | more than 6 years ago | (#22374966)

So I need this WHY?

Most websites have little to say, and take all day to say it.
Having a detailed graphical analysis of the blather seems unlikely to improve the situation. GIGO.

It would seem spending just a tad more time writing for HUMANS would be way more productive than writing for machines. Having a thousand computers watching your 100 monkeys seems unlikely to bring enlightenment or useful knowledge out of a pile of garbage and human blathering that passes for information on the web these days.

People used to write web pages.
Now they write software to write web pages.
It's not surprising they now need to write software to understand the web pages.
What's the point?

Re:Yawn... (2, Informative)

InsurgentGeek (926646) | more than 6 years ago | (#22375086)

You're a little unclear on the concept of an RDF graph. It's not a graph like the ones in your intro algebra class - it's an RDF (that's Resource Description Framework) representation of the semantics of a document. Check Wikipedia for Semantic Web or RDF.

Re:Yawn... (4, Interesting)

QuantumG (50515) | more than 6 years ago | (#22375092)

Writing AI that can read English (and all the other languages) and figure out the meaning is just, well, taking too long. But let's say it wasn't.. what would be the point? Would you say there was no point? Or would you say it was freakin' awesome and look forward to the day when you can actually ask a question and get a sensible answer from a machine?

Well, if we are very forgiving we can get this kind of thing happening with current technology; we just have to supply all the "content" in a form that our primitive algorithms can handle. The Semantic Web is that. Maybe around the 3rd generation of these algorithms we might be ready to do the translation to machine form automatically.. maybe not.. but at least the Semantic Web people are again talking about translation.. there was a time when they all said it was a fruitless path and the best way was to just supply applications for creating machine readable content easily.

Re:Yawn... (1)

InsurgentGeek (926646) | more than 6 years ago | (#22375122)

Perfect! A concise reasonable explanation. Thanks.

Re:Yawn... (1)

jlarocco (851450) | more than 6 years ago | (#22375422)

I can already ask Yahoo or Google a question and get a sensible answer. I guess I'm missing how this "semantic web" thing equates with AI that understands the meaning of English.

Besides that, if you rely on the "content providers" to provide the meta-data the system is less than useless. Legitimate sites won't use it or update it, and illegitimate sites will abuse the system.

Re:Yawn... (3, Interesting)

QuantumG (50515) | more than 6 years ago | (#22375494)

Uh huh.

When is the next shuttle launch? [google.com]

This is the first hit, not shuttle launch info. [nasa.gov]

This is the second hit.. [nasa.gov] ah hah! The next launch is on Feb 7.. wait a minute, it's Feb 10! Was it delayed or something? Oh, I see, it says "Launched".. great, when's the next one.. March 11 +.. hmm.. wtf does + mean? Apparently I need to read this [nasa.gov] and hmm.. nothing there about what the + means.. I guess it means it might get delayed, they do that.

See all that reasoning I had to do? See how long that took me? That's what the Semantic Web is for.

Re:Yawn... (1)

MightyYar (622222) | more than 6 years ago | (#22375628)

You are pretty knowledgeable about this stuff, so I'm going to ask you:

How does this stuff handle abuse? I mean, what's to stop Senior Spamalot from marking up all his machine-readable stuff for shuttle launches, but actually dishing you to a Viagra page? I don't understand how the "Semantic Web" won't be terribly abused.

Re:Yawn... (3, Insightful)

QuantumG (50515) | more than 6 years ago | (#22375636)

How do *you* know when information is bullshit?

How does Google's pagerank algorithm?

Re:Yawn... (2, Interesting)

MightyYar (622222) | more than 6 years ago | (#22378368)

It's a damn good point, but I'm better at it than a computer. Though to tell you the truth, Google's spam filter on gmail is darned close to perfect (once trained) - so I can see how they would be able to filter the information using something akin to their spam filter. And they'd still use something like pagerank to rank the results, so that might go a long way toward nailing the spammers.

But I wonder whether that approach is going to be any simpler or more effective than just developing better or more intelligent search algorithms? Then they don't have to determine whether or not the information is bullshit, because chances are that I'm not searching for herbal Viagra so my search terms aren't in the page.

It's not just spammers that will throw a wrench into the semantic web... what if I accidentally leave out the metadata for a page? Or make a cut-and-paste error and forget to edit the metadata so that it is completely wrong for a page? The answer, as I see it, is computer-generated metadata... at which point, why not just build that functionality into your search engine?

By the way, if you instead search for "Space Shuttle Launch Schedule", the first result on Google [nasa.gov] is very apropos. I often find that Google rarely leads you astray once you learn to think like a search engine (which isn't very hard - they are dumb). But I'll grant you that a more natural language for search queries would be a boon for beginners.

Oh, and the plus after March 11? There is a legend at the top of the page:

Legend: + Targeted For | * No Earlier Than (Tentative) | ** To Be Determined
:)

Re:Yawn... (1)

dkf (304284) | more than 6 years ago | (#22378894)

The answer, as I see it, is computer-generated metadata... at which point, why not just build that functionality into your search engine?
Yahoo are already doing that. Go to their search page [yahoo.com], enter some search term (e.g. "linux") and search. On the results page there should be a little arrow down at the bottom of the top bar; click on that and it will open up a panel that includes concepts linked to the search terms (and also possible refinements of the search). I know (from talking to the people at Yahoo) that they're deriving the concepts automatically from their spidered data, and it works really well.

How resistant is it to spam? No idea, to be honest!

Re:Yawn... (1)

Dan East (318230) | more than 6 years ago | (#22375858)

Obviously the current searches are not semantic, so the key is searching for the right thing. At first glance, your query sounds simple enough. However, the problem is that there simply may not be any webpages dedicated to providing the exact information you asked for. In this case, are there webpages that are kept up-to-date with information specific to the next shuttle launch? What you really need to search for is not the "next" shuttle launch, whose definition is always changing, but "shuttle launch schedule" [google.com] , or even simply "shuttle schedule" [google.com] .

Should it be easier to search than that? Sure, that would be nice. My biggest concern is that since the semantic engine is trying to infer meaning to your query (specifically, display pages that don't explicitly match your query - in this case, not when the next shuttle launch is, but simply the current shuttle launch schedule), it would be open to even more abuse through spamming and PageRank type abuse.

Dan East

Re:Yawn... (4, Insightful)

QuantumG (50515) | more than 6 years ago | (#22375968)

Ok, you seem to be of the belief that I'm still talking about search.. in the classical "give me a web page about" sense. I'm not.. and the Semantic Web people are not. "next" has a meaning.. everyone knows what it is. "shuttle launch" has an almost unique meaning.. although some concept of our culture and common sense is needed to disambiguate it. Asking when the next shuttle launch is has a unique answer: a date and a statement of the confidence in that date. For example "March 12, depending on weather and other things that might scrub the launch." I don't expect this to be "webpages that are kept up-to-date with information specific to the next shuttle launch"... I expect the answer to my question to be synthesized in real time from a dynamic pool of knowledge which is obtained from reading the web. I want a brain in a jar that is at my beck and call to answer every little question like this that I have throughout the day.. on everything from spacecraft launches to what the soup of the day is at the five closest restaurants to my office. There doesn't need to be some web page that is updated daily by some guy who works near me and enjoys soup.. there just needs to be information on soup and location posted by restaurants in my area.

So am I talking about search? Well, yes, but it's an algorithm that uses search to answer my questions.. instead of me having to do it.

Think about that soup question.. how would you do it now? I'd go to Google maps.. enter the location of my office, search businesses for restaurants, click on one of the top 5 to see if they have a daily updated menu, note the soup of the day, go back to Google maps, click on the next one, etc, until I had the answer I wanted. That's a pretty simple algorithm.. it's something a machine learning system could come up with.
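
Spelled out, that soup algorithm really is small. A sketch in Python against a stubbed in-memory knowledge base; in the Semantic Web vision the entries would be crawled from data the restaurants publish themselves, and every name here is invented:

# Stub knowledge base standing in for crawled semantic data.
KB = [
    {"type": "Restaurant", "name": "Luigi's", "km_from_office": 0.3,
     "soup_of_the_day": "minestrone"},
    {"type": "Restaurant", "name": "Pho 99", "km_from_office": 0.5,
     "soup_of_the_day": "pho ga"},
    {"type": "Bookstore", "name": "ReadMore", "km_from_office": 0.1},
]

def soups_nearby(kb, limit=5):
    # Filter to restaurants, sort by distance, read off the published soup.
    places = sorted((e for e in kb if e["type"] == "Restaurant"),
                    key=lambda e: e["km_from_office"])
    return [(e["name"], e.get("soup_of_the_day", "unknown"))
            for e in places[:limit]]

print(soups_nearby(KB))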

Re:Yawn... (0)

Anonymous Coward | more than 6 years ago | (#22377170)

So basically you're talking about dumbing down something that people already know how to use and do not find complicated, by rehashing the same old Star Trek interface ideas nobody cared about in their prior iterations.
MS Help had this, Ask Jeeves had this, people simply don't like it. It's not useful. Can you Semantickers move on with your life and stop bothering us already? Thank you.

Re:Yawn... (1)

chthonicdaemon (670385) | more than 6 years ago | (#22375788)

I must say I don't think it's quite as "freakin' awesome" as you seem to. I believe that natural language is not only hard to handle correctly, but also hard to use correctly. There is a reason why we have formal specifications and legal language -- "natural" language is just too vague. Now in some niche areas where you don't have your hands available I can see the allure of voice recognition, but I honestly think that speaking to computers to have them do stuff in anything resembling natural language will be harder to use to get to a specific goal than what we have now. I suppose if you just want some kind of result, that's not so bad, but I kinda like getting exactly what I ask for. A much better argument here [utexas.edu] . I know it's about programming, but that's basically what we do with computers on any level of use.

Re:Yawn... (1)

QuantumG (50515) | more than 6 years ago | (#22375814)

So ask in a formal language.. point is, we can't even ask questions now.

We can't even ask questions about systems which are designed to be machine readable. Look at software debuggers.

Re:Yawn... (2, Insightful)

martin-boundary (547041) | more than 6 years ago | (#22376206)

You think that if we feed weak AI algorithms a lot of cleaned up, pre-tagged data, that's going to help overcome the weakness of the algorithms and produce something worthwhile?

Sorry, there's a flaw in your reasoning: Who gets to pre-tag the data? Everybody. But you can't trust everybody on the net. So you'll get a lot of data that's specifically designed to confuse and subvert the weak algorithms, and by definition such algorithms aren't strong enough to rise to the challenge.

The Semantic Web people will get a nasty shock when they realize that what they've really got is the Spamantic Web.

Re:Yawn... (1)

QuantumG (50515) | more than 6 years ago | (#22376266)

Blah, vetting the quality of your inputs is necessary but it's a completely different algorithm to answering queries. This is already true of search engines... and we have good ways of handling it. But hey, you're the kind of person who gives up looking for a job because you're sure no-one will hire you.

Re:Yawn... (1)

martin-boundary (547041) | more than 6 years ago | (#22376390)

1) "vetting the quality of your inputs" is not AI. It's just putting in what you want to see coming out, assuming you understand sufficiently the way the particular algorithm you're tweaking works.

2) "we have good ways of handling it" is a euphemism for human beings. Yes, just throw people at the problem and let them censor the bits of data that they don't like. Again, you're just letting in what you want to see coming out. Search engines have teams who get paid to scrub their data. It's not AI. We still get tons of garbage in searches.

3) I'm the kind of person who doesn't like being swindled with big words which hide thin deliverables. The problem with claiming AI power which depends on human power behind the scenes is that human power on the net just doesn't scale.

Re:Yawn... (1)

QuantumG (50515) | more than 6 years ago | (#22376486)

You must be living in some other world to me. Google search results are not vetted by humans. It's this little algorithm called pagerank.. you might have heard of it.

Re:Yawn... (1)

martin-boundary (547041) | more than 6 years ago | (#22376612)

Perhaps you should read up on it? PageRank proper is only a small factor in Google's index sorting method. Other factors are ad hoc things like weights for whether words appear in headings or paragraphs, whether the page is full of hidden keywords, whether the word "homepage" appears prominently etc.

PageRank itself is merely about counting links, which is entirely independent of content, and not as useful on its own as you might think. For example, there's no guarantee that an index page will appear before a subordinate page if all you use is PageRank, so PageRank is simply overruled. There's special code just to try and make sure people's homepage appears first when their name is put into the search box.

Google's search results are vetted by teams of humans all the time. That's also the only way so called spam pages can be identified. Once it becomes clear there's a trend, an ad hoc censoring algorithm can be written to hide those kinds of spam pages from the returned results. And if someone complains, some more ad hoc code might be written to fix the bugs in the censoring algorithm.

In any event, there's a whole lot of human oriented massaging of results to comply with criteria that simple algorithms can't discover on their own. And still Google's search results are full of dupes, they aren't clustered properly, and are often out of date, or haven't you noticed?

Re:Yawn... (0)

QuantumG (50515) | more than 6 years ago | (#22376630)

Hehe, no, maybe *you* should read up on it.

Re:Yawn... (1)

martin-boundary (547041) | more than 6 years ago | (#22376638)

Right, whatever.

Re:Yawn... (1)

ThePromenader (878501) | more than 6 years ago | (#22376860)

PageRank is a gauge of popularity, not content, more than anything. It's a factor that only comes into play when Google's algorithm judges your content to be at the same level as that of another page as an answer to a query - only then does the most popular page get the top spot.

I like the concept of a semantic web, but frankly, I don't like its present trend of implementation. It seems so "chunky" (metacruft), and it still has to be managed by humans if it hopes to attain any level of accuracy.

If we can't mimic human thought, perhaps we can make a search method that can take into account the results of its reasoning. Boolean searches are quite powerful - why not develop a system along those lines? With added functionality - say, "bob" -5 "Ralph" would turn up pages that have those two words within five words of each other, with results ordered by relevance (matching boolean '-5', matching boolean 'AND', matching one word, etc.). Have the boolean markup generated by a GUI, if you will. As for prices, I'm sure these could be recognised by any search engine if it is programmed correctly.

Even this solution does not seem "complete" to me - somehow we're going to have to find a way to recreate human reasoning (to a certain level) before we can have a semantic web that is of any widespread (www) use to anyone.
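
The proximity operator described above is simple to implement over tokenized text. A minimal sketch in Python (the function name and the -5 semantics are just for illustration):

import re

def within_n_words(text, a, b, n):
    """True if words a and b occur within n words of each other."""
    words = re.findall(r"\w+", text.lower())
    pos_a = [i for i, w in enumerate(words) if w == a.lower()]
    pos_b = [i for i, w in enumerate(words) if w == b.lower()]
    return any(abs(i - j) <= n for i in pos_a for j in pos_b)

print(within_n_words("Bob asked Ralph about the schedule", "bob", "ralph", 5))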

Re:Yawn... (1)

tm2b (42473) | more than 6 years ago | (#22375520)

The point is that sophisticated enough tools can help you find the websites that do have something useful to say.

The amount of garbage out there only makes these tools more necessary.

Re:Yawn... (4, Interesting)

daigu (111684) | more than 6 years ago | (#22375954)

I'll tell you why you need it. It provides another layer of abstraction. Let's try a few illustrative examples.

1. Let's say you work for a Fortune 500 company and you get over 10,000 emails a day from customers complaining. Do you think it is better to read each one, or to have a tool that abstracts them to graphically display the key concepts customers are complaining about, so management can do something about it today?

2. You are a clinical researcher in cancer and have a terabyte of unstructured patient data. Can you think how text descriptions of pathology reports might be displayed graphically against outcomes to suggest some interesting insights?

There's a lot of useful information that isn't on blogs - although it would be useful for them too. You need to exercise a bit more imagination.
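
The first example is essentially entity extraction plus aggregation. A toy sketch in Python, with a trivial keyword matcher standing in for a Calais-style concept extractor (the concept list and emails are invented):

from collections import Counter

CONCEPTS = ["billing", "shipping", "warranty", "support"]  # stand-in extractor

def extract_concepts(text):
    return [c for c in CONCEPTS if c in text.lower()]

emails = [
    "Your billing department double-charged me.",
    "Shipping took three weeks and support never answered.",
    "Billing error again, and support is unreachable.",
]

# Management sees "billing: 2, support: 2, shipping: 1" instead of
# reading 10,000 messages one by one.
tally = Counter(c for e in emails for c in extract_concepts(e))
print(tally.most_common())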

   

Re:Yawn... (1)

Brandybuck (704397) | more than 6 years ago | (#22376078)

You will need it because it will take far more than porn downloads to fill up the hard drives of tomorrow. Indexed links between every word in every file to every other word in every file will take care of that nagging empty space.

Re:Yawn... (1)

Arancaytar (966377) | more than 6 years ago | (#22377726)

People used to write web pages.
Now they write software to write web pages.


We also have software to write software (see [[Compiler]]). Now that is just lazy and decadent.

Great, just great ... (4, Funny)

ScrewMaster (602015) | more than 6 years ago | (#22375096)

Semantic Web Getting Real

Just what we need. Yet another version of RealPlayer.

you're not the only one who misread (1)

Laebshade (643478) | more than 6 years ago | (#22376316)

I misread it as "Symantec Web Getting Real" and I was like, "wtf? The maker of Norton's website is buying Real?"

Oblig. Matrix (1)

SeaFox (739806) | more than 6 years ago | (#22377102)

Semantic Web Getting Real

"If real is what you can feel, smell, taste and see, then 'real' is simply electrical signals interpreted by your brain."

Advertising..... (0, Offtopic)

IHC Navistar (967161) | more than 6 years ago | (#22375112)

If online news outlets cut out the advertising promos that precede every video news clip, it would be a million times more popular than it already is.

I mean, nobody wants to see an advertisement that is twice as long as the video clip itself. People will especially be turned off when they realize they took the time to view a 30-second mattress ad or similar advertisement just to view a 45sec-1min news clip about a story that is either boring, no more informative than print, or just plain crappy.

Advertising is the Black Plague of all media. Its consumer-repellent ability can't be denied, and the number of good ideas that have been ruined by ads is unimaginable.

Wordpress Plugin (0)

Anonymous Coward | more than 6 years ago | (#22375120)

I think the bounty for a WordPress plugin is a neat idea. Having seen how poor the performance is for existing tag suggestion tools for WordPress, maybe Calais can do a better job?

pfft... (3, Funny)

djupedal (584558) | more than 6 years ago | (#22375178)

"Wenig made some good points about the end of the latency wars..."

Mr. Wenig must not be all that familiar with /.'s 'editorial' habits :\

Oops... (1)

Aaron5367 (1049126) | more than 6 years ago | (#22375272)

The first time I read the title, I thought it said 'Symantec Getting Real'. Well, I was planning to leave a smart comment about how Symantec and Real don't belong in the same sentence.

In case you have no clue what they're talking abou (4, Informative)

WK2 (1072560) | more than 6 years ago | (#22375278)

If you are like me, and have absolutely positively no dang fucking clue what the summary is talking about: http://en.wikipedia.org/wiki/Semantic_Web [wikipedia.org]

According to the Wikipedia history, this concept has been around since at least 2001.

Re:In case you have no clue what they're talking a (1)

InsurgentGeek (926646) | more than 6 years ago | (#22375338)

Ummh, I think that's the point. The concept - first advocated by Tim Berners Lee - has been around for a long time. The technology to make it real has not. This is a big step in that direction. It's not the whole answer - but services like this will help overcome one of the key constraining factors: ubiquitous metadata tagging of content.

hype, waste of time, or big mess (3, Interesting)

globaljustin (574257) | more than 6 years ago | (#22375818)

the wiki article you linked to says:

For example, a computer might be instructed to list the prices of flat screen HDTVs larger than 40 inches (1,000 mm) with 1080p resolution at shops in the nearest town that are open until 8pm on Tuesday evenings. Today, this task requires search engines that are individually tailored to every website being searched. The semantic web provides a common standard (RDF) for websites to publish the relevant information in a more readily machine-processable and integratable form

On first read, I like what they are trying to do, but I see so many problems with what they are thinking, and I am not a web designer in any sense.

First, I don't have a problem finding things to buy on the internet. The problem is the signal-to-noise ratio. There are TOO MANY Google results for something like 'plasma tv.' No matter what kind of RDF is used, it will be abused by people who want their URL to show up in your search for whatever reason. I think someone touched on this a little earlier in this thread, but it deserves repeating.

Second, can you imagine a scenario where, say, Best Buy or Fry's uses some 'semantic web' application to do real-time web-searchable updates of their inventory? That's what would have to happen for this to work and do something that isn't already possible.

Right now, I can search for 'plasma tv' in Google or eBay. Then I can call my local retailers to see if they carry that item and have it in stock. In order for this system to make any kind of tangible change in the example given, retail chains would have to update their inventories online whenever a purchase is made or new items are delivered to the store.

It's an interesting idea. I wonder if the retailers would go for it? All it means for them is fewer people coming into their stores... sounds like that would hurt sales.

I also hate internet hype. It really fouls things up, more than some want to acknowledge. I try to keep my 64 year old dad educated enough to buy coffee beans on eBay, check email, look at news, etc. Every time he sees 'semantic web' or 'web 2.0' in the media, it just confuses him, and I imagine, people like him who just use the net for basics like online bill pay, eBay, etc. He doesn't need a new buzzword to get motivated to shop online or whatever.

He has the motivation already... silly contrived 'new media' buzzwords just waste time and confuse people.

Re:hype, waste of time, or big mess (1)

mdwh2 (535323) | more than 6 years ago | (#22379330)

It's an interesting idea. I wonder if the retailers would go for it? All it means for them is fewer people coming into their stores... sounds like that would hurt sales.

You might as well say the same thing about the Web, though - why would all these companies go to the trouble of having websites, especially if it means fewer people in their stores?

Because it means more sales. And sales with fewer people in the stores is a good thing - lower costs.

Not the Semantic Web (5, Insightful)

timeOday (582209) | more than 6 years ago | (#22375290)

IMHO this is not the semantic web. The primary representation is still (just) natural language. Anything in addition to that is really just search engine technology under a different banner. Is that a bad thing? No! I've always said the semantic web was bound to fail because people don't want to spend a lot of extra effort tagging their information so others can slice and dice it; instead, the evolution of natural language processing in search (rather than manual tagging) will solve the problem. Maybe the Reuters idea of exposing the "inferred" metadata will be useful (as opposed to normal search engines like Google, which simply keep this metadata in their own indices), though as yet I don't see why.

Not the Social Web (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22376254)

"No! I've always said the semantic web was bound to fail because people don't want to spend a lot of extra effort tagging their information so others can slice and dice it"

And yet we have social sites.

natural language processing in search? (2, Interesting)

pbhj (607776) | more than 6 years ago | (#22378282)

timeOday >>> "evolution of natural language processing in search (rather than manual tagging) will solve the problem"

But then if you're creating an addon for Joomla (or any template elements really) to display event listings, why not add a semantic tag so that a search engine could limit the domain by "tag:events"? The extra effort involved is pretty minimal, especially when, if you code well, each event is probably in a "<div class="event eventtype"> ..." anyway.

Once people realise that search engines can do semantic filtering then it will be worth it.

As for tag-spamming, well, surely Google et al. won't accept pages based on tags first, but will do their usual contextual/quantitative analyses first and then limit based on tags. So we wouldn't be gaining any spam over what we have now?

Why can't AI get the semantics from the plain text (2, Insightful)

presidenteloco (659168) | more than 6 years ago | (#22375292)

When you start aggregating as much text as Google does, the semantics just start popping out, in the form of word relationship statistics. The massive corpus size, when measured carefully, acts to filter semantic signal from expressive-difference "noise".

Combine that kind of latent semantic analysis of global human text with conceptual knowledge representation and inference technologies (which would use a combination of higher-order logic, Bayesian probability, etc.) and it should be possible to create a software program that could start to get a basic semantic understanding of documents and document relationships in the ordinary "dumb" web.

Could the proponents of the semantic web please tell me what it will add to this?

My basic proposition is that if an averagely intelligent human can infer the semantic essence (the gist, shall we say) of individual documents, and relationships between documents on the web, why can't we build AI software that does the same thing, and then reports its results out to people who ask?
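
What the parent describes is roughly latent semantic analysis. A minimal sketch in Python with scikit-learn, on a toy corpus (real systems use corpora many orders of magnitude larger, which is where the statistics start to work):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the space shuttle launch was delayed by weather",
    "nasa postponed the orbiter liftoff due to storms",
    "our restaurant serves a different soup every day",
]

# Word-occurrence statistics in, low-dimensional "semantic" vectors out.
tfidf = TfidfVectorizer().fit_transform(docs)
vectors = TruncatedSVD(n_components=2).fit_transform(tfidf)

# With a big enough corpus, documents that share meaning but few words
# end up with similar vectors; this toy run just shows the machinery.
print(cosine_similarity(vectors))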

Re:Why can't AI get the semantics from the plain t (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22375388)

[...] if an averagely intelligent human can [do X], why can't we build AI software that does the same thing [...]
Because wetware is still ahead of machines in a few domains. Be thankful for that because when we can build AI software for everything, we won't be needed anymore.

Re:Why can't AI get the semantics from the plain t (2, Insightful)

The Master Control P (655590) | more than 6 years ago | (#22376066)

Why should I be thankful about spending my adult life working because machines aren't up to the task? I'll be thankful when machines take the work and leave us free to do what we want.

Re:Why can't AI get the semantics from the plain t (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22376604)

I really don't see that happening. The transition to this sort of economy is basically where the problem is now. As human labor is replaced by robotic arms in factories, those employees are left to find another job. Only, their entire skill set has now been replaced, so they are back to square one... They don't receive pay for the rest of their lives just because their job was replaced with a machine that does it better.

Re:Why can't AI get the semantics from the plain t (2, Informative)

msuarezalvarez (667058) | more than 6 years ago | (#22375642)

Could the proponents of the semantic web please tell me what it will add to this?

Actually, the story is about a tool which does (a part of) what you are describing.

In all seriousness... (1)

v(*_*)vvvv (233078) | more than 6 years ago | (#22376052)

It is because our best AI is still extremely stupid compared to even a dumb dog.

In response to:

My basic proposition is that if an averagely intelligent human can ... , why can't we build AI software that does the same thing, ...

Re:In all seriousness... (1)

Gazzonyx (982402) | more than 6 years ago | (#22376868)

It's not a function of stupidity, it's a function of the limited fanout factor of a computer. The brain has a fanout factor of 10,000 whereas a computer has a fanout factor of 10, IIRC. Our mind can grasp details, isolate them and compare them to other 'things' (experiences, objects, people, sights, sounds, etc.) without explicit instructions to do so, whereas a computer cannot (and I highly doubt ever will) do this. This is as I understand it from talking to someone, somewhere down the line - please correct me if I'm off base.

Because "AI" is a misnomer (2, Informative)

melted (227442) | more than 6 years ago | (#22377328)

There's no more "intelligence" in AI than in a can of Campbell soup. It's basically statistics, linear algebra and (sometimes) handcoded rules for reasoning. It doesn't evolve. It doesn't build upon what it "knows". It has no self-awareness or consciousness and its reasoning capabilities, if present, are extremely weak compared to even children.

We're so early in the development of this field that no one can even define what "self awareness" or "consciousness" really is, let alone how to create it or scale it. Folks try. There's Cycorp, there's Powerset, there are a lot of people in academia who work on NLP, Machine Vision, classification, neuroscience, etc. There is, however, no unifying vision or theory/understanding what is it we're trying to build, and the current methods have nothing in common with "intelligence" per se. They do learn, in a sense that they figure out the hidden structure of a given set of data by approximating it using a mathematical model. Even though this model sometimes closely matches what a human brain does (e.g. in multilayer neural nets), they don't come anywhere close to what one would call "intelligence". What they lack is scale (and speed), and advanced cognitive mechanisms required to become self-learning.

It's also interesting to note that at this point humans know on a high level how their brain works. The neocortex is a six-layer neural net with links going cross-layer and neurons organized into columns. Trouble is, there are a hundred billion neurons. We sorta know how vision works, too. Trouble is, we can't work with it in real time (because, naturally, you'd need a chunk of those hundred billion neurons). Heck, even human language is a pain in the ass if you don't have advanced cognition (AKA strong AI), with the ability to understand euphemisms, sarcasm and idioms, paraphrase, generalize and specialize. Heck, even anaphora resolution is not solved yet (i.e. what he/she/it in the current sentence refers to in the previous text). It's as if you had a bunch of parts and no manual and someone asked you to assemble a spaceship out of what you have, warning you that some parts are broken and may require you to make your own replacements. Without blueprints. Blindfolded. With your hands tied behind your back.

I do believe that in 50 years we will have strong AI, though. I work in a science lab, however, and many researchers don't share my optimism.

Re:Why can't AI get the semantics from the plain t (1)

semanticsearch (1157807) | more than 6 years ago | (#22378520)

Actually, NLP software does generally use those statistical methods. RDF is a storage and sharing mechanism - that's the big deal.

OpenCalais (3, Funny)

lenzg (1236952) | more than 6 years ago | (#22375516)

Finally, Reuters released OpenCalais as free open-source software. OpenDover will appear any time soon. (someone may then connect both using a Channel, SSH perhaps)

Re:OpenCalais (1)

Zoxed (676559) | more than 6 years ago | (#22377052)

> (someone may then connect both using a Channel, SSH perhaps)

Trains-on-rails, tunneled, would be the most secure: less chance of someone seeing your bytes ferried across, and a man-in-the-middle attack would be much more difficult !!

Really real this time? (0, Flamebait)

jfengel (409917) | more than 6 years ago | (#22375556)

The best indicator of vaporware seems to be continual postings on Slashdot that something is real.

Given that the Semantic Web is neither Semantic nor Web, I think we've got another data point for that theory.

Re:Really real this time? (1)

msuarezalvarez (667058) | more than 6 years ago | (#22375652)

Dude, you forgot the ending `Discuss'.

Kids...

Vaporware? (0, Flamebait)

TheBrutalTruth (890948) | more than 6 years ago | (#22375696)

Uhh, maybe we need to get rid of the tags, Slashdot. Or get rid of the ignorant assholes who tag erroneously. Or those who intentionally tag things that exist (RTFA!) as vaporware merely because they don't like/agree with the idea.

Probably easier to get rid of tags...


Re:Vaporware? (2, Insightful)

smurgy (1126401) | more than 6 years ago | (#22375948)

I noticed that too... I was looking at the tags to provide an example of what machine-created tagging has to go up against to beat human tagging, for a rant up above. I guess I have to thank that idiot for proving my point. Humans do hostile tags; nobody has yet written a subroutine to make a machine act like a jerk.

Confusing terms. (1)

v(*_*)vvvv (233078) | more than 6 years ago | (#22375862)

The semantic web refers to a specific attempt/vision put forth by w3c.

http://www.w3.org/2001/sw/ [w3.org]

This article is about a news organization using semantic tools to help extract and manipulate certain data. Sure, they are related a little maybe, but if related meant equal, then every computer would break.

Just because the word "semantic" matches, they've confused the two domains, and if humans can't even do it, I wonder what our automated semantic web would look like with robots trying to make connections. I cannot even begin to imagine how hackable that would be.

Re:Confusing terms. (1)

Joosy (787747) | more than 6 years ago | (#22377144)

Regardless of whether or not this is the "real" semantic web, the concept will never fly until they rename it. Most people don't grasp what semantic is supposed to mean in this context, but if they called it something like the data web then the lightbulb would click on for a lot more people.

"Free" for "anyone"? Not so fast. (2, Informative)

janbjurstrom (652025) | more than 6 years ago | (#22377284)

Reuters just opened access to their corporate semantic technology crown jewels. For free. For anyone. Their Calais API lets you turn unstructured text into a formal RDF graph in about one second. ...
It's "free" for "anyone" for loose definitions of the terms. Glancing at their terms of use [opencalais.com] (emphasis added):

You understand that Reuters will retain a copy of the metadata submitted by you or that generated by the Calais service. By submitting or generating metadata through the Calais service, you grant Reuters a non-exclusive perpetual, sublicensable, royalty-free license to that metadata. From a privacy standpoint, Reuters use of this metadata is governed by the terms of the Reuters and Calais Privacy Statements.
So you pay with your metadata. One can say you're doing that with Google too. Nevertheless, that's not entirely free.

Also, it's not yet for "anyone." According to the Calais roadmap [opencalais.com] , only English documents are accepted: "Calais R3 [July 2008] begins ... to incorporate a number of additional languages... Japanese, Spanish and French with additional languages coming in the future."

A Little too Cynical (4, Insightful)

Gregory Arenius (1105327) | more than 6 years ago | (#22377296)

I understand being jaded about internet hype and buzzwords but I'm still surprised that after nearly eighty comments there doesn't seem to be anyone who has anything to say other than "vaporware" and "it won't work because of the spammers." Yes, maybe it has been overhyped and yes it is taking a while for the envisioned ideas to come to fruition but that doesn't mean that those ideas aren't worthwhile.

I'll use the following example because I recently had to do this with non-semantic tools. Let's say you wanted to see how good or bad a job a transit agency is doing in its city in comparison to other similar cities. A couple of metrics you might use to find similar cities would be population size, population density and land area. Google doesn't do a good job with something like that. You end up needing to search for cities individually and then finding their data points. Or you can find a list of cities ranked by population or population density. If you search on Google for something like that you end up at one of the Wikipedia lists. These lists are helpful but... still lacking. They don't contain all the cities you need or they don't provide a way to look at multiple data sets at the same time. The lists are also compiled by hand and aren't automatically updated when the information on the city page is changed. The data is in Wikipedia, though. Every city page lists that information in a little box near the start of the article. But how do I take this data from the form it's in into a form that I can use to find what I need to know? Enter the semantic web.

Let's say that Wikipedia, or at least the parts dealing with geography, were semantic. Now, there are tens of thousands of pages describing countries, regions, states, counties, parishes, cities, towns and villages. Then those pages are translated into many other languages. Some of the data that these pages contain is of the same type. They all contain the name of the locality, latitude, longitude, size, population size and elevation. For data such as this it would be pretty easy to have a form to enter the data into, as opposed to using the usual markup, and the form could put the data into the proper markup for the page and the proper RDF. Once the data is in proper RDF form it would be easy to automate the process of updating translations of that page with the new data, as well as updating any pertinent lists. It would also make it easier for people who want to analyze or use the data, because they would be able to access it much more easily.
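
For concreteness, a sketch of one city's infobox rendered as RDF with Python's rdflib; the vocabulary and figures are invented for illustration (a real deployment would use a shared schema):

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

GEO = Namespace("http://example.org/geo#")  # invented vocabulary

g = Graph()
g.bind("geo", GEO)

city = URIRef("http://example.org/city/Portland_OR")
g.add((city, RDF.type, GEO.City))
g.add((city, RDFS.label, Literal("Portland, Oregon")))
g.add((city, GEO.population, Literal(568380)))   # illustrative figure
g.add((city, GEO.landAreaKm2, Literal(376.5)))   # illustrative figure

# A ranked list, a translation, or a Marble-style globe can all read this
# same machine-readable record instead of re-parsing wikitext.
print(g.serialize(format="turtle"))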

But nobody really wants machine-readable access to this information, you might say, except for the random geek and researcher. I would disagree. Let's say you're using a program like Marble, which is similar to Google Earth in some ways but is completely open source. If they wanted to display the population of a city when you hover over it, they would currently have to create and maintain their own dataset or they'd have to write a parser to extract it from Wikipedia. Neither of those options is particularly easy at the moment, but if the information was in semantic form on Wikipedia it would be a piece of cake.

The strength of the semantic web isn't, in my opinion, going to be AI-like personal agents or anything like that. It'll be things that in many ways are already here. Like Yelp putting geotags on the restaurants they review, and apps like Google Earth taking that data, which is available in machine-readable (semantic!) form, to overlay on a map so that you can see what's nearby. It'll be applications doing the same with the geotags from flickr. It's really useful mashups like http://www.housingmaps.com/ [housingmaps.com] . It's the transit agency putting real-time bus data up in semantic form so you can see on your iPhone's Google map how far away the bus is. So yeah, maybe the semantic web is overhyped, but that doesn't mean there isn't a lot of substance there, too.

Cheers,
Greg

Re:A Little too Cynical (1)

CSLarsen (961164) | more than 6 years ago | (#22377636)

Just a simple thing like right-clicking on a dollar amount in your browser and choosing "Convert to local currency" would be a huge improvement over what's already available. Or being able to have your browser dynamically recognize dates and format them from American to European format, client-side.

Vapourware my arse (4, Insightful)

theno23 (27900) | more than 6 years ago | (#22377424)

The company I work for, Garlik [garlik.com], has two products that are run off semantic web technology: DataPatrol [garlik.com] (for pay) and QDOS [qdos.com] (free, in beta).

We use RDF stores instead of databases in some places as they are very good at representing graph structures, which are a real pain to deal with in SQL. You often hear the "what can RDF do that SQL can't" type arguments, which are all just nonsense. What can SQL do that a field database, or a bunch of flat files, can't? It's all about what you can do easily enough that you will be bothered to do it.

A fully normalised SQL database has many of the attributes of an RDF store, but
a) when was the last time you saw one in production use?
b) how much of a pain was it to write big queries with outer joins?

RDF + SPARQL [w3.org] makes that kind of thing trivial, and has other fringe side benefits (better standardisation, data portability) that you don't get with SQL.
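
As a small illustration of the kind of graph query that turns into self-joins and outer joins in SQL, here is SPARQL run through Python's rdflib (the schema and data are invented):

from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice ex:knows ex:bob .
    ex:bob   ex:knows ex:carol .
    ex:carol ex:name  "Carol" .
""", format="turtle")

# Friends-of-friends with an optional name: a few lines of SPARQL versus
# a self-join plus an outer join in SQL.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?fof ?name WHERE {
        ex:alice ex:knows ?friend .
        ?friend  ex:knows ?fof .
        OPTIONAL { ?fof ex:name ?name }
    }
""")
for row in results:
    print(row.fof, row.name)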

I guess it shouldn't be a surprise to see the comments consisting of the usual round of more-or-less irrelevant jokes and snide commentary - this is Slashdot after all - but I can't help responding.

Jane Jones has a developer key (1)

dugeen (1224138) | more than 6 years ago | (#22377698)

I clicked 'here' for a developer key and was told that it had been despatched to jane.jones@gmail.com. Good news for Jane Jones.