Helioid Team Launches First Snaggle Demo

Tuesday, March 22, 2011

Update: we’ve added icons distinguishing each category.

The Helioid Team is currently developing an online bartering service, named Snaggle, which extends to shopping and trading online the general philosophy we’ve taken to web search. The two aspects of web search we’ve primarily been developing Helioid to address and improve upon are web exploration and user-specific adaptive search. With Helioid, we are taking the next steps in web search by connecting users with the web content that best fits their specific interests, even when they are engaged in relatively undirected exploratory search.

In all of our projects, our overarching goal is to make users’ online experience more immersive. In the case of Helioid, that means equipping the user with search tools that render online content more intuitively explorable, the way one would explore collections of books in a library. In the case of our next-generation user interface project, IoImmersive, it means developing ways for users to manipulate and organize items on their computer in three dimensions, much as they’d organize physical objects in their home. And in the case of our nascent online bartering service, Snaggle, it means giving users more intuitive, comprehensive representations of the space of available transactions online, relieving them of as much of the workload of navigating that space as possible, and integrating Snaggle with users’ social networks to turn the search for stuff into a fun, communal activity.

A screenshot of the demo is shown below.

Snaggle is a bartering site through which users can buy, sell, or trade items online. Users will be able to create lists of things they want and things they have, and Snaggle will recommend related items the user is likely to be interested in. The exploration of the online trading community that Snaggle facilitates opens up a whole new world of ways for users to get the stuff they want. As an added bonus, online trade extends the life span of items that might otherwise have been thrown away when their owners couldn’t sell them, so trading on Snaggle reduces waste and helps the environment.
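
The want/have matching behind such recommendations can be sketched in a few lines. The following Python is an illustrative sketch only, with invented names and data; it is not Snaggle’s actual matching logic.

```python
# Toy sketch of matching users' "have" and "want" lists to find
# mutually beneficial trades. All names are invented for illustration.

def find_mutual_trades(users):
    """Return (user_a, user_b, item_a, item_b) tuples where each
    user has an item the other wants."""
    matches = []
    names = sorted(users)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            # Items a could give b, and vice versa
            a_gives = users[a]["have"] & users[b]["want"]
            b_gives = users[b]["have"] & users[a]["want"]
            for item_a in sorted(a_gives):
                for item_b in sorted(b_gives):
                    matches.append((a, b, item_a, item_b))
    return matches

users = {
    "alice": {"have": {"bike"}, "want": {"guitar"}},
    "bob": {"have": {"guitar"}, "want": {"bike"}},
}
matches = find_mutual_trades(users)
```

A real recommender would also surface *related* items rather than only exact matches, but exact mutual matches are the natural starting point.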

Of course, some online bartering services already exist. Several young sites like U-Exchange, Tradeaway and Swap.com are specifically geared towards the trade of goods and services online. Furthermore, that seasoned, stalwart facilitator of finding random stuff online, Craigslist, has a bartering section on which users can list things they want to find and things they’re willing to trade away. But what these existing services lack is the very thing we have tried to exemplify in all of our projects: an immersive online experience. Using Craigslist’s barter section, you can hunt and peck by city for currently available trades of interest, but

  • you can’t search for trades over a broader region;
  • you can’t submit a list of trades you’d be interested in and have relevant trades returned as they become available;
  • you can’t easily explore the trades related to a particular item or set of items if those items are not themselves available in the network.
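
To make these missing pieces concrete, here is a minimal Python sketch, with invented names and data, of a saved watch list that is checked against new listings as they arrive and that filters by a region broader than a single city:

```python
# Hypothetical sketch: a saved watch list matched against freshly
# posted listings, with a multi-city region filter.

NORTHEAST = {"boston", "new york", "philadelphia"}

watch_list = [
    {"want": "road bike", "region": NORTHEAST},
]

def match_new_listing(listing, watch_list):
    """Return the watch entries a newly posted listing satisfies."""
    return [
        entry for entry in watch_list
        if entry["want"] == listing["item"]
        and listing["city"] in entry["region"]
    ]

matched = match_new_listing({"item": "road bike", "city": "boston"}, watch_list)
```

Any listing posted anywhere in the watched region triggers a match, so the user no longer has to re-run city-by-city searches by hand.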

Although the aforementioned online trade sites attempt to facilitate these activities with varying degrees of success, none of them quite hits the nail on the head. What’s more, these services fail to fully integrate with users’ social networks, even though there is so much potential to leverage social network activity for cooperative exploration of the space of available trades. By getting users talking about interesting possible trades, online barter sites would gain access to a huge resource of trusted product recommendations, while making the process of finding transactions online more fun, community oriented, and richly immersive.

Check out the demo of Snaggle’s map feature at snaggle.gnoskos.org.

Search Visualized

Thursday, February 10, 2011

When Helioid launched in 2008, the field of search startups was peppered with other search and metasearch engines that used various forms of information visualization to facilitate search refinement. From Quintura’s concept maps to KartOO’s Venn-reminiscent topic diagrams to Grokker’s hierarchical topic clusters, there seemed to be a thriving community of entrepreneurs who agreed that the time was nigh to move beyond the traditional strictly linear format of displaying search results. Of course, we at Helioid were and continue to be staunch supporters of this frame of mind. However, it seems as though the field of visualization-based general web search engines has been thinned a bit. KartOO and Grokker have both folded and Quintura, after having transitioned from their concept map to a less complicated tag cloud, has faded significantly into relative obscurity. Of the few lauded visualization-based search services around when we were founded, it seems only Carrot2 and its commercial sister projects under the Carrot Search umbrella continue to power forward.

However, although there seems to have been a slight and unfortunate retreat from visualization of results as the next big thing in general web search, in the past few years a number of niche search and exploration engines using various forms of information visualization have popped up. And a couple of them are doing quite well for themselves. Fastcase is a legal search engine that provides users with interactive maps and timelines of search results, allowing them to dig deeper and more dexterously into the body of results and to gain a more comprehensive view of the spectrum of cases related to their query. Jinni is a movie and TV show recommendation engine that lets users wander through collages of related shows to find programs they’d be interested in, even when they’re not really sure what they’re in the mood for. Both services have enjoyed explosions in popularity in the past year. Meanwhile, a number of fun little visualization tools have popped up for exploring your online social networks, such as Friend Wheel, Social Graph and Touch Graph, which also offers tools for exploring related items on Amazon and similar results on Google.

One of the most common criticisms of visualization-based search engines is that linear search works because it’s easy, and visualizations of results would likely confuse users. Our response has typically been that for deep, involved searches and for open-ended, exploratory searches, the benefits of these visualizations outweigh any slight increase in initial user confusion (which well-designed, intuitive visualizations can largely remedy). Though the field of general web search engines using visualizations has thinned, the success of the niche search tools listed above vindicates our thinking by demonstrating value in precisely the search domains where users are most likely to engage in these types of search activities. Fastcase has demonstrated the utility of visualizations for serious searchers who need to dig deep into results, and Jinni has shown how helpful interactive visualizations can be in obtaining satisfying results even when you’re not sure what you’re searching for. And the social network visualization tools, particularly Touch Graph, show just how fun, immersive and intuitive visualization-based search can be. The success of these niche services should precipitate further exploration of the use of visualizations in search.
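
The grouping that underlies topic-cluster visualizations of the kind Grokker and Carrot2 popularized can be illustrated with a deliberately tiny sketch. The Python below is not any engine’s actual algorithm; it simply buckets result titles under their most widely shared term:

```python
# Toy illustration of topic clustering for visualization: label each
# result title with its most widely shared non-stopword term.
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "of", "for", "in"}

def cluster_by_term(results):
    # Count, for each term, how many titles contain it
    counts = Counter()
    for title in results:
        counts.update(set(title.lower().split()) - STOPWORDS)
    clusters = defaultdict(list)
    for title in results:
        terms = set(title.lower().split()) - STOPWORDS
        # Pick the term appearing in the most titles; ties broken by
        # taking the alphabetically last term.
        label = max(terms, key=lambda t: (counts[t], t))
        clusters[label].append(title)
    return dict(clusters)

results = [
    "Python search libraries",
    "Search engine design",
    "Python data tools",
]
clusters = cluster_by_term(results)
```

Real systems use proper tokenization and term weighting, of course; the point is only that even simple shared-term grouping yields clusters a visualization can draw.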

Interaction Beyond 2D Environments with Cooliris

Friday, January 23, 2009

When comparing game play scenes from "The Legend of Zelda":http://en.wikipedia.org/wiki/File:Legend_of_Zelda_NES.PNG released in 1986 with those from "The Legend of Zelda: Twilight Princess":http://en.wikipedia.org/wiki/File:Zelda_-Twilight_Princess-_stab.jpg released in 2006, the immediately obvious difference is dimensionality. The classic game takes place on a 2D grid, while the modern game unfolds in 3D with particle systems, reflections, shaded textures, and more.

At Helioid, we strongly believe that the world wide web is headed for a similar transition, from minimal 2D interaction towards immersive 3D interaction. The 1986-style constrained navigation offered by current browsers is outmoded. The popularity of the Wii and the iPhone demonstrates that if users are given improved alternatives to classic styles of interaction, they will use them. Immersive 3D environments are such an alternative for web browsing.

In addition to our work at Helioid, Kenneth and I are both actively involved in the 3D immersion working group "IoImmersive":http://www.ioimmersive.com/. At IoImmersive we are developing a roadmap for pushing interaction beyond 2D environments by integrating more dimensionality into user interfaces. Many of Mac OS X Leopard’s "features":http://www.apple.com/macosx/features/, such as “Quick Look” and “Stacks”, involve overlay effects. Windows Vista’s "Flip 3D":http://www.microsoft.com/windows/windows-vista/features/flip-3d.aspx?tabid=1&catid=4 uses “visual depth” to display open files and programs. The closer one looks, the more movement one sees away from strictly 2D environments. IoImmersive proposes that 3D integration will go to the core of user interfaces and peripherals. From the browser to the operating system, from the mobile phone to whatever replaces the mouse, we are headed towards immersion in virtual environments.

Cooliris’ browser plug-ins are an inspiring push towards immersive 3D environments. "Features":http://www.cooliris.com/product/?p=features include 3D navigation, integration with various rich media and product search engines, as well as the ability to store favorites. It is lacking in many areas, but, in its nascent stages, the Cooliris browser provides a solid framework upon which great things can be built.

The "vision":http://www.cooliris.com/cooliris-video.php presented by Cooliris board member Randy Komisar is just as encouraging as their products: “What would the world be like if rather than having to browse - if rather than having to pull up things in applications - what if we were able to navigate and find and share directly through the rich media that is now dominating the web.” Indeed, what would happen?

We would retrieve and arrange query results in a 3D environment. We would automatically reorganize items in this environment in response to user interaction. We would data mine users’ actions to find improved methods of presenting information to the user. We would significantly increase the efficiency and abilities of users.

CoolIris 3D

We would use newly proposed "3D standards-based markup":http://www.web3d.org/ and additional 3D event handlers (e.g. ‘onrotate’). We would integrate current "open source":http://www.blender.org/ 3D suites and SVG "editors":http://www.inkscape.org/. We would finally have an excuse to use "spatial operating environments":http://oblong.com/ for interaction.

Still, the above ideas barely scratch the surface of what we would do. With an intuitive 3D environment and standards-based programming frameworks, the primacy of the operating system would fade. The hugely important component missing from web browsing is an integrated graphical user interface. Each web page has its own rules and implicit philosophy of user interaction. One’s mind is forced to readjust to these different rules and work within them. In general, there is no sense of continuity.

Shifting to a completely immersive browsing system would set information on the internet (web pages, emails, feeds, etc.) within a larger ecosystem. Relations between objects could be made abundantly clear through visual cues. For example, I could open an email message and it would be centered in front of me, with the pages it links to, summaries of its key concepts, and collections of information on the sender and recipients set back to the sides (and perhaps above). If any of these related items interests me, I can seamlessly bring it to primacy while the email recedes slightly.
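
The focus-and-context arrangement just described can be sketched as a simple layout function. This Python is purely illustrative; the coordinates and names are invented:

```python
# Illustrative sketch: the focused item sits at the origin, and
# related items are set back (negative z) and spread to the sides.

def arrange(focus, related):
    layout = {focus: (0, 0, 0)}
    for i, item in enumerate(related):
        side = -1 if i % 2 == 0 else 1               # alternate left/right
        layout[item] = (side * (i // 2 + 1), 0, -1)  # one step back in z
    return layout

layout = arrange("email", ["linked page", "summary", "sender info"])
```

Bringing a related item “to primacy” would then be a matter of swapping its coordinates with the focused item’s and animating the transition.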

The undercurrents of modern innovation hold a revolutionary concept in information interaction. Helioid is dedicated to promoting this revolution.

If Microsoft's Thumbtack Were Intelligent

Saturday, January 17, 2009

In early December Microsoft Live Labs released "Thumbtack":http://livelabs.com/blog/introducing-thumbtack/, which is said to “[use] machine learning and natural language techniques to understand the information you give it.” Looking through the interface, one notices some interesting tools, such as a gadget that creates plots based on attributes of the items you collect and a “Layout Gadget” that presumably creates layouts but currently appears to work only with "IE7":http://thumbtack.livelabs.com/FAQ.aspx. Intelligent parsing of information, on-demand analysis, visualization: there are great ideas here. The remaining obstacle is giving users access to them in an intuitive and simple fashion.

The improvements that could make Thumbtack a more useful tool include:

1. Automatically parse attributes in content added to Thumbtack.

bq. In a "FAQ video":http://video.msn.com/?mkt=en-US&playlist=videoByUuids:uuids:fa6082f9-a8e0-4067-9c32-53ef1ae4ab42&showPlaylist=true&from=msnvideo it is said that Thumbtack can automatically create properties from content attributes. How to make this happen is unintuitive, and the video later concedes that the attributes for the automobile feature plots were added manually as name (key) and value pairs. A properties gadget that presents an interface asking users to add properties as “name” and “value” pairs is fine in a debugger (or, in this context, as an advanced option), but in a user interface it invites confusion and estrangement.

bq. When a user adds content, Thumbtack should automatically parse that content into keys and values, as sets, or into whatever representation is most useful (maybe a graph). If the parse gets things wrong, the user should be able to change the values or labels, and the parsing engine will learn and hopefully do better next time. If the parsing engine is at a complete loss, it can prompt the user for interactive guidance.

bq. With this capability built in, the information would be much more useful to the user. Because newly added information would be quickly integrated into existing information through the parsed metadata, the user would explicitly see the value of adding and correctly categorizing new information. Additionally, the boundaries and definitions of user-created categories would likely become more clearly defined.
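
A first pass at this kind of automatic parsing, plus a user-correction hook, might look like the following. This is a hedged sketch with invented function names, not Thumbtack’s actual engine:

```python
# Sketch: extract "Key: value" pairs from pasted text, and let the
# user correct a wrong parse (a real engine would learn from the fix).
import re

def parse_attributes(text):
    """Pull key/value pairs out of lines shaped like 'Key: value'."""
    attrs = {}
    for line in text.splitlines():
        m = re.match(r"\s*([A-Za-z ]+):\s*(.+)", line)
        if m:
            attrs[m.group(1).strip().lower()] = m.group(2).strip()
    return attrs

def apply_correction(attrs, key, value):
    """Apply a user-supplied fix to a mis-parsed attribute."""
    attrs[key] = value
    return attrs

attrs = parse_attributes("Make: Honda\nYear: 2006\nPrice: $4,500")
```

Anything the parser misses stays editable, so the user is never worse off than with purely manual key/value entry.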

2. Integrate Thumbtack into web search.

bq. The Thumbtack interface appears to be completely divorced from web search. The interface has a text field labeled “search” in the upper right that does not search the web but instead searches items already present in Thumbtack. Yet one of the primary times users need to categorize and keep track of web information is when they are conducting searches, whether goal-directed or exploratory.

bq. Thumbtack could be integrated with Microsoft’s Live Search "API":http://msdn.microsoft.com/en-us/library/bb251794.aspx so that search results are displayed on the Thumbtack canvas and incrementally parsed for properties as the user looks through them. Search results brought into the interface could be compared, arranged, and saved, all while learning algorithms run in the background to determine what new information and what types of organization would be most helpful to the user.

3. Give the Thumbtack “bookmarklet” more features.

bq. Another "video":http://video.msn.com/?mkt=en-US&playlist=videoByUuids:uuids:6a905d98-0332-4c3f-8b25-75737cd9b675&showPlaylist=true&from=msnvideo demonstrates the “bookmarklet” that pops up over web pages so that you can add information to Thumbtack while browsing. Unfortunately, this tool can only be used to add copied content, a title, and tags to a specific collection (items in Thumbtack are grouped into collections). A user may also want to add content to more than one collection, assign special properties to it, share it on the spot, or find similar things.

bq. In addition to giving the user more options for how they treat the content they are adding, the tool would be very useful as an “inspector” palette for the current page. It could display information about the current page and show how it relates to items already in your Thumbtack collections. Microsoft Live Search data about the current content could be pulled up, letting the user explore similar content or narrow in on it.

bq. Users should also be able to use Thumbtack’s gadgets from within this popup. Maybe they want to see a quick plot of this content integrated into their existing collections, or to see how adding it would change the layout of existing content. Forcing the user to switch between two environments is unnecessarily burdensome.

Microsoft Thumbtack is an impressive tool with a lot of potential. If it were given a more intuitive interface and a clearer direction, it could be very useful. Sadly, since its release it seems to be slowly drifting into obscurity.