Four VUI Lessons from Social Web Design Pitfalls

I was reading an article this morning about things we can learn from Social Design. What I particularly liked was that the author wasn't trying to analyze the typical examples (Facebook, YouTube, Second Life, etc.), but rather to learn from the mistakes of other, lesser-known ventures.

In particular, Joshua points out four "Lessons Learned" that I think apply to Voice User Interface design and to any type of self-service application:

  1. Not attracting enough users (a.k.a. not retaining enough users, or losing them to alternatives such as a live agent)

     If you design and develop a great self-service system and nobody uses it (or everyone opts out), you have this problem. What's even worse, some companies think that the way to solve it is to make it hard for callers to talk to an operator, to add more options, or to spend more money marketing the system, all of which are the wrong moves. Good applications (like social sites) build value one user at a time. As Joshua points out, "If one user finds value, then they're much more likely to tell others or invite their friends." One-size-fits-all systems aren't the answer; instead, designers (and businesses) should focus on succeeding on a smaller level, on individual users and their needs. One point I particularly loved was: "One strategy in particular is to design for your friends, get the system working well for them, and then release it to a broader audience." What would happen if we started to design our apps for our friends and families instead?

  2. Trying to do many things at once

     Too many times I've heard the casual customer request: "The system should provide the same functionality the old system had, plus the new features the website offers, along with some new initiatives we've been thinking about." (I can already feel the goose bumps.) Since any new design (or redesign) is considered an opportunity to 'upgrade', it is hard for businesses to understand that 'enhancing' is not necessarily a matter of adding things; it may mean removing things. Unless we focus on the things customers really care about and need in order to be successful, callers will continue to hate the systems we design, no matter how 'useful' we consider them to be.

  3. Lack of Sustained Execution

     Most systems tend to be developed, rolled out, and then become static pieces of art, aside from the occasional update (e.g. a change in the hours of operation) or sporadic tunings (which tend to happen once a year). What makes Social Web applications so successful, on the other hand, is that they are in a continuous state of evolution: they keep changing and never stop getting better. As Joshua points out, "It's too easy to fall into the desktop software mindset of build, release, and wait for the next cycle," and I truly agree with his comment about this being a mindset issue: "If you see it as an opportunity for continual improvement, your outlook will be more positive."

  4. Pointing the Finger when Missteps Happen

     It may not be that apparent given the current state of the speech industry, but reactions such as the SNL sketches, the Citi Simplicity campaign ("press 0"), and the GetHuman movement hint that consumers are becoming much more vocal about their experiences and expectations regarding self-service and automation in general. Therefore, we, as "managers of these communities," must act accordingly, accept responsibility for our caller base, and earn their trust and respect.

    Any others I might have missed?

2 thoughts on “Four VUI Lessons from Social Web Design Pitfalls”

  1. #3 is a big issue with speech apps. It's not just that post-deployment tuning is essential; it's the way apps can be failing and no one knows about it. Unless someone is monitoring completion rates, the system can be failing silently. I blogged about this at http://silentsoftware.blogspot.com/2006/08/you-cant-tune-silence.html, and suggested a temporary tuning mode that adjusts the recognition thresholds to force more confirmations, allowing the app to log the values callers reject.
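     The "temporary tuning mode" idea above can be sketched in a few lines. This is only an illustrative Python sketch, not a real speech-platform API: the function names, thresholds, and the `caller_confirms` callback are all assumptions standing in for whatever the actual dialog engine provides.

     ```python
     # Hypothetical sketch: in tuning mode, raise the confirmation threshold so
     # that borderline recognitions trigger an explicit confirmation step, and
     # log the values callers reject for later grammar/threshold analysis.

     NORMAL_CONFIRM_THRESHOLD = 0.45   # below this, confirm before accepting
     TUNING_CONFIRM_THRESHOLD = 0.80   # tuning mode: confirm far more often

     rejected_log = []  # (value, confidence) pairs the caller said "no" to

     def handle_recognition(value, confidence, caller_confirms, tuning_mode=False):
         """Accept, confirm, or re-prompt based on recognizer confidence."""
         threshold = TUNING_CONFIRM_THRESHOLD if tuning_mode else NORMAL_CONFIRM_THRESHOLD
         if confidence >= threshold:
             return ("accept", value)
         # Borderline result: ask the caller to confirm it.
         if caller_confirms(value):
             return ("accept", value)
         # Caller rejected the hypothesis: this is exactly the data tuning needs.
         rejected_log.append((value, confidence))
         return ("reprompt", None)
     ```

     The point is that tuning mode deliberately trades a slightly longer call (more confirmations) for labeled failure data that would otherwise vanish silently.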

  2. Absolutely. But like you said, the current tuning methods aren't very efficient (compared to the almost immediate feedback you get from most website analytics tools) and require a significant amount of human intervention. One of the things I suggest is looking at parameters that can be obtained automatically from the system (e.g. in-grammar rates, accuracy rates, confidence scores) and creating custom tracking mechanisms to measure whether a caller was successful at completing a certain task, so that the call flows themselves can dynamically adapt based on these results (results which can then be reviewed and analyzed during a full tuning).
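     A minimal sketch of that adaptive idea, with all class and task names invented for illustration: track per-task success rates automatically, and have the call flow fall back to a more guided (directed-dialog) strategy for tasks where too many callers are failing.

     ```python
     from collections import defaultdict

     class TaskStats:
         """Rolling per-task attempt/success counts (illustrative, not a real API)."""
         def __init__(self):
             self.attempts = defaultdict(int)
             self.successes = defaultdict(int)

         def record(self, task, success):
             self.attempts[task] += 1
             if success:
                 self.successes[task] += 1

         def success_rate(self, task):
             a = self.attempts[task]
             # No data yet: assume the open prompt is fine until proven otherwise.
             return self.successes[task] / a if a else 1.0

     def choose_dialog_strategy(stats, task, threshold=0.7):
         # If too many callers fail this task, switch from the open-ended
         # prompt to a step-by-step directed dialog for subsequent callers.
         return "open_prompt" if stats.success_rate(task) >= threshold else "directed_dialog"
     ```

     The logged rates would still feed the periodic full tuning, but the flow itself stops hemorrhaging callers in the meantime.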
