mugnyte's Journal: Sensoral Templates for the Web

Ask a web-based bot to define something, and you're almost guaranteed to get an English paragraph, written in part or wholly by humans, sitting on a site or in a cache found by a spider.

  Now, this itself isn't bad, and it's gotten us almost all the "reference" material one could ask for on certain topics. Images and videos are now emerging as the next target of human search.

  Now, what if there were a series of extended descriptive templates anyone could fill out? These templates would follow a general format (like Unicode) that could be used to describe something for a particular sensory function. For example, if you wanted to learn the shape of an apple, a normalized, general-format 3D mesh would appear, along with links to the development cycle, varieties, diseases, etc. For the most part, a "representative sample" for such an open-ended search as "apple" would return a red 3D mesh, complete with descriptive qualities of the makeup of this mesh.

But then again, a mesh is really just a surface formula used to approximate a real thing. Perhaps a full-blown language is necessary to codify the whole apple, its subsections, and so on. When this language was read through a "visual" template, it would return a picture of the apple, but enable 3D slicing/dicing of the object. When run through a "tactile" template, it would return something that could describe the touch of an apple. Smell, taste, etc. Our computer-human interfaces are really only auditory and visual, so for now perhaps only that information gets focus.
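As a rough sketch of what one record in such a language might look like, here is a minimal Python structure with one entry per sensory channel. Everything here is hypothetical: the field names, the "mesh/v1" format tag, and the tetrahedron standing in for a real apple mesh.

```python
# A hypothetical "sensory template" record for an apple: one entry per
# sensory channel, each holding a normalized, general-format description.
# The mesh is a deliberately tiny stand-in (a tetrahedron), not a real
# apple model.

apple = {
    "concept": "apple",
    "visual": {
        "format": "mesh/v1",  # assumed format identifier
        "vertices": [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
        "triangles": [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)],
        "color": "red",       # the "representative sample" quality
    },
    "tactile": {
        "format": "texture/v1",
        "surface": "smooth, waxy",
    },
    "links": ["development cycle", "varieties", "diseases"],
}

def channels(record):
    """List the sensory channels a record could be rendered through."""
    reserved = {"concept", "links"}
    return sorted(k for k in record if k not in reserved)

print(channels(apple))  # ['tactile', 'visual']
```

A "visual" template reader would consume only the `visual` entry; a "tactile" one only `tactile`; new channels could be added without disturbing either.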

Obviously, this language would be most easily adopted if it:

  • Used existing web technology to encode the proposed language
  • Was approachable through the use of tools and/or techniques that allowed anyone to contribute to the body of information
  • Was globally accessible and yet politically agnostic - not encoded into any regional format and not locked away from any set of users.

Proposals:

  - Extend HTML or a variant (perhaps through some of the open-ended tags/attributes for custom data) to encode the visual and auditory holdings of a concept or thing.
  - Provide a series of tools to encode information into this format. It should take in visual and auditory information to a greater degree than a simple picture.
  - Provide a tool to consume this information, enabling drill-into capabilities for the visual information, markups, etc.
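The first proposal might be sketched as follows, under the assumption that custom data attributes carry the encoded description. The attribute names (`data-sense`, `data-format`, `data-src`) are illustrative only, not any existing standard:

```python
import xml.etree.ElementTree as ET

# Hypothetical markup: a span carrying a sensory description in custom
# data-* attributes, as one possible HTML extension. A consuming tool
# could find such spans and fetch the richer description behind them.
doc = """
<p>The <span class="concept"
         data-sense="visual"
         data-format="mesh/v1"
         data-src="apple.mesh">apple</span> ripens in autumn.</p>
"""

root = ET.fromstring(doc)
span = root.find("span")
print(span.text, span.get("data-sense"), span.get("data-src"))
```

The point is only that the encoding rides on existing web technology: any XML/HTML-aware tool can read or write it today.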

This is, of course, sort of a Wiki with extended information. In fact, extending Wikipedia in this way would be a great undertaking. "Person" references could bring up a video of that person. This doesn't need much in the way of new technology. However, if "Object" references were more fully fleshed out to include this mesh concept, plus a small Java window to spin/slice/zoom the object, I think more folks would be drawn to it.

Candidate Technologies:
  - Generalized 3D meshes, or material composition languages. (POV-Ray's C-like syntax is a crude example)
  - Encoded sound in a compressed format, but transferable in both text-based and binary formats.
  - Certain medical imagery interfaces, whereby 3D images can be explored in a CAT-scan like manner.
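The second item, making binary media transferable in text form, is already routine via base64 encoding; a minimal illustration (the byte string is a placeholder, not real audio):

```python
import base64

# Pretend these bytes are a compressed audio clip; base64 makes the
# binary payload safe to embed in a text-based document.
audio_bytes = bytes(range(16))
text_form = base64.b64encode(audio_bytes).decode("ascii")
assert base64.b64decode(text_form) == audio_bytes  # lossless round trip
print(text_form)
```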
