So this is from a few weeks back, but still worth a mention. For the month of February, I worked with a small group (Martha Narro, Ravi Palanivelu, Nirav Merchant, Salika Dunatunga, and my advisor Kobus Barnard) to develop a good demo of iPlant's Bisque platform in action. Martha and I worked hard on a clear, focused tutorial, modeled on the "lab protocol" format that Ravi recommended as a good way to speak to biologists in terms that build on their existing knowledge.
During that same period, I made a large number of small changes to tweak and streamline the Pollen Tube Tracker. For example, the input parameters were rather cryptically named, at least from the point of view of a plant biologist, so I added more help text and reworded them to communicate their meanings better. One instance is the "spot size" parameter: it was formerly specified in pixels, but we changed it to microns, with the module extracting the pixel-to-micron conversion factor from the image metadata.

Sometimes these changes have unanticipated consequences. Soon after I made that change, the tracker started to take days and days to run. Why? It turns out that if the user sets the minimum spot size smaller than the equivalent of 2 pixels, the spot detector goes bonkers -- it sees one-pixel spots everywhere. That's not really a problem for the spot detector itself, but its excessive output consequently clogs up the tracker.
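To make the failure mode concrete, here is a minimal sketch of the conversion plus the sanity check that would have prevented the runaway runs. The names (`spot_size_um`, `microns_per_pixel`, `MIN_SPOT_PIXELS`) are my own illustrations, not the actual Bisque or Pollen Tube Tracker API:

```python
# Illustrative sketch only -- not the real Pollen Tube Tracker code.
MIN_SPOT_PIXELS = 2.0  # below this, the detector "sees" one-pixel spots everywhere


def spot_size_in_pixels(spot_size_um: float, microns_per_pixel: float) -> float:
    """Convert a user-specified spot size from microns to pixels,
    clamping to the smallest size the spot detector handles sanely.

    microns_per_pixel would come from the image metadata in practice.
    """
    if microns_per_pixel <= 0:
        raise ValueError("image metadata gave a non-positive pixel size")
    size_px = spot_size_um / microns_per_pixel
    if size_px < MIN_SPOT_PIXELS:
        # Clamp rather than let sub-2-pixel settings flood the
        # downstream tracker with spurious one-pixel detections.
        size_px = MIN_SPOT_PIXELS
    return size_px
```

The design choice here is to clamp (or one could reject with an error) at the detector's known floor, so a user-friendly unit change in the interface can't silently push the algorithm into a regime it was never meant to handle.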
So that's why user testing is important. Research people like me tend to write interfaces for a single user (oneself), and it's a bit humbling to face the challenge of HCI again, when the user cannot literally read (i.e., share) the mind of the developer.
The goal was the March 2 RCN meeting, and iPlant had, I think, a good showing. All the iPlant representatives talked to a ton of people interested in various computing requirements. Many of the questions I received had the common theme of "That looks similar to what I need, but . . ." and in every case the specifics of the user's data, or image models, were very significant. One colleague of Ravi's had color images and wanted to track both pollen tube tips and pollen tube nuclei. Another wanted to track locally-linear structures (actin fibers) that move in all kinds of crazy ways: squirming, twisting, growing, and breaking. I bet that in 100 years these sorts of adaptations will be easy, but we aren't there yet.