mSpaces, from our deployment in CS AKTive Space through to the IMDB browser, appear to confirm what HCI research has shown repeatedly: keeping contextual information persistently and spatially available helps people process information.
Some of the questions to pursue about the model relate not just to processing information, but to accessing it: do mSpaces help people access information better than other extant methods of presenting hypertexts (such as the Web)? How could that be measured?
For instance, could an mSpace of a domain - such as classical music - help people who know little about classical music (other than knowing what they like when they hear it) begin to explore that domain? We have begun to look at this question of better support for access by adding a feature called a "preview cue" to an mSpace.
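The preview-cue idea can be illustrated with a minimal sketch (all names and data here are hypothetical illustrations, not the actual mSpace implementation): when a user lingers on an unfamiliar category such as "Baroque", the browser offers one representative item - for music, an audio clip - so the user can sample the category before committing to exploring it.

```python
from typing import Optional

# Hypothetical mapping from category labels to representative media clips.
# In a real system these would be selected from the domain's instance data.
PREVIEW_CUES = {
    "Baroque": "clips/bach_brandenburg_3.ogg",
    "Romantic": "clips/chopin_nocturne_op9_2.ogg",
}

def preview_cue(category: str) -> Optional[str]:
    """Return a representative clip for a category, or None if none is known."""
    return PREVIEW_CUES.get(category)
```

A browser would call `preview_cue` on hover or selection and play the returned clip, giving a newcomer a concrete sense of an otherwise opaque label.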
In terms of visualization, mSpace has used one main approach: a multicolumn browser. The mSpace model itself is not confined to this visualization; others are possible. What other effective ways are there to represent aspects of an mSpace polyarchy? For instance, how might categories that are not currently selected be presented or visualized? How might these approaches work on small-screen or screenless portable or in-transit devices?
One of the questions we have been looking at is how to package the mSpace model into an API that can be applied to an ontology to produce an mSpace supporting the model's full functionality. The current first step is the mSpace Framework software, to be released shortly on SourceForge, but we are still refining and expanding the technology.
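The core mechanic such an API must expose can be sketched as follows (a toy illustration under assumed names, not the mSpace Framework's actual interface): a "slice" is an ordered list of dimensions drawn from the ontology, each rendered as a column, and a selection in one column filters the values shown in later columns.

```python
from typing import Dict, List

# Toy ontology instance data as (era, composer, work) triples;
# a real implementation would query the ontology's knowledge base.
TRIPLES = [
    ("Baroque", "Bach", "Brandenburg Concertos"),
    ("Baroque", "Vivaldi", "The Four Seasons"),
    ("Classical", "Mozart", "Requiem"),
]

SLICE = ["era", "composer", "work"]  # the chosen dimensions, in column order

def column_values(selections: Dict[str, str], column: str) -> List[str]:
    """Values to display in `column`, filtered by selections in other columns."""
    idx = {dim: i for i, dim in enumerate(SLICE)}
    rows = [t for t in TRIPLES
            if all(t[idx[d]] == v for d, v in selections.items())]
    return sorted({t[idx[column]] for t in rows})
```

For example, `column_values({}, "era")` lists every era, while `column_values({"era": "Baroque"}, "composer")` narrows the composer column to Baroque composers only. An API along these lines, driven by an arbitrary ontology rather than hard-coded triples, is the kind of packaging the Framework aims to provide.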