[book] The Semantic Web - Sun, 20 Mar 2005 23:26:50 GMT

(Wiley 2003) i've read another semantic web book before, and was not impressed ... but this one really jumped out at me. it is divided into 9 chapters, with most chapters covering a specific technology, and it just so happens i like almost every one of those technologies: XML, Web Services, the XML alphabet, RDF, RDF Schema, Taxonomies, Ontologies. the ultimate goal of the semantic web is to turn the web into a machine-readable knowledge base that intelligent agents can perform tasks with and on. the beginning chapters were mostly intro and repeat info, so i went through those fast. the one idea i really liked out of those chapters was SWWS (semantic web web services). i also liked the XML alphabet chapter, because i have not been keeping up with the less adopted XML technologies like XPointer, XBase, and XLink. then it moved into RDF. i've read another RDF book before too, and didn't really get it at that time either. having read a number of books on expert systems and natural language processing since then ... this time i got it. and i could directly relate it to my experience running a spider and trying to pull semantic meaning out of the presentation web (currently through a combination of regular expressions and some lame AI). after that, the taxonomy and ontology chapters did a great job of comparing and contrasting the two. there was also a little bit about predicate logic, which i happened to love in school. overall, i really liked this book.
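to make the RDF part concrete for myself, here is a minimal sketch (mine, not from the book) using Python's rdflib. the example.org vocabulary is made up, but it shows the basic idea: facts become triples that an agent can query directly, instead of my spider scraping presentation markup with regular expressions.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/books#")   # made-up vocabulary, for illustration only

    g = Graph()
    book = URIRef("http://example.org/books/the-semantic-web")
    g.add((book, RDF.type, EX.Book))                        # what kind of thing it is
    g.add((book, RDFS.label, Literal("The Semantic Web")))
    g.add((book, EX.publisher, Literal("Wiley")))
    g.add((book, EX.publicationYear, Literal(2003)))

    # an "agent" asking for books and their years, no screen scraping involved
    for s in g.subjects(RDF.type, EX.Book):
        print(s, g.value(s, EX.publicationYear))

the same triples could be published alongside the human-readable page, which is the machine-readable knowledge base part of the pitch.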

now for my thoughts on the web. i definitely think the current web sucks, as far as machine processing goes. it's great if i want to look at pictures of naked chicks, or teach myself some new topic. but it's far from great if i want to make a bot to go and do some meaningful work for me. what i'm currently struggling with is related to the battle between weak vs strong AI. e.g. strong AI is the expert system approach, where an expert codifies his knowledge into a form that a computer can then execute, while weak AI is a bottom-up approach that is trained (or not) to find some form of meaning on its own (possibly through a neural net). an example is the spam world: a block list of IP addresses is to strong AI as bayesian spam filtering is to weak AI. now if we had a computer that could actually read and understand, then i would say the semantic web is useless ... but we don't. and when we do, it will probably only be able to understand text ... but what about pictures and videos? the video and image searches of today go off of surrounding text, and don't actually understand the images without that textual context. so my current thought is that we need the semantic web until we get intelligent agents that can read text, however many years that takes ... and beyond that, we need the semantic web to describe non-textual resources on the web. so now we just need the specifications to be fleshed out and tool support to follow, because people are already half-assing the semantic web with tagging.
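to make that spam analogy concrete, here is a rough sketch. the IPs and per-word probabilities are made up, and a real bayesian filter would be trained on labeled mail rather than hard-coded, but it shows the top-down rule vs bottom-up statistics contrast i mean.

    # "strong"/top-down: an expert codifies the rule directly
    BLOCKED_IPS = {"203.0.113.7", "198.51.100.42"}

    def blocklist_is_spam(sender_ip):
        return sender_ip in BLOCKED_IPS

    # "weak"/bottom-up: per-word spam probabilities learned from labeled mail
    # (hard-coded here instead of trained), combined naively
    spam_prob = {"viagra": 0.97, "meeting": 0.05, "free": 0.80, "agenda": 0.03}

    def bayes_is_spam(words, prior=0.5, threshold=0.9):
        spam, ham = prior, 1 - prior
        for w in words:
            p = spam_prob.get(w, 0.4)        # unseen words lean slightly hammy
            spam *= p
            ham *= (1 - p)
        return spam / (spam + ham) > threshold

    print(blocklist_is_spam("203.0.113.7"))       # True
    print(bayes_is_spam(["free", "viagra"]))      # True
    print(bayes_is_spam(["meeting", "agenda"]))   # False

the blocklist breaks the moment the spammer moves; the bayesian filter keeps working because it learned the pattern instead of memorizing the rule.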

now i wonder if this relates to the programming concept of domain-specific languages (DSLs). i have not read anything about DSLs, but my uneducated guess is that they are highly related to ontologies, i.e. the domain is represented by an ontology. so when i start some new programming project, i don't start modeling the domain on my own ... instead i go to some ontology library and just pick the domain from there. then my domain objects could be used by other applications that know how to work with objects from that domain. seems like a perfect fit for SOA as well. but i have not actually read about DSLs, so this could be way off. and what is the MS position on this? i don't ever hear about the semantic web from the MS-centric blogosphere i subscribe to. maybe MS research has something in the works ...
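purely as a guess at what that ontology-library idea might look like in code, here is another rdflib sketch: instead of inventing my own Person class, i type my object with FOAF (a published ontology), so anything else that understands FOAF can consume it. the app namespace and values are made up.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, FOAF

    APP = Namespace("http://example.org/myapp/")   # hypothetical app namespace

    g = Graph()
    person = APP.someUser
    g.add((person, RDF.type, FOAF.Person))         # the class comes from the shared ontology
    g.add((person, FOAF.name, Literal("some user")))
    g.add((person, FOAF.homepage, URIRef("http://example.org/~someuser")))

    # any other service that knows FOAF (not my app) can make sense of this
    print(g.serialize(format="turtle"))

and since the serialized RDF can ride over a plain web service call, that is roughly where i see the SOA fit.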