
BigHand launch server-side speech recognition

At the company's EMEA user conference in London today, BigHand launched its new Server-based Speech Recognition module, aimed at organisations looking to further streamline administrative overheads in the current climate, or to support an upturn in work as the economy recovers. The new voice-to-text module also supports voice submission from BlackBerry devices and other smartphones, alongside traditional dictation hardware. The server-side speech recognition add-on is part of the latest v3.3 release of BigHand's digital dictation workflow software.

The speech recognition module (which utilises the Dragon NaturallySpeaking Speech Server from Nuance Communications) includes both a recognition-only option, in which the Transcription Server sends the resulting text back to the author, and a recognition-with-proofing option, in which the Transcription Server sends the resulting text to a secretary.

The secretary makes any corrections, saves the document and then submits a copy of the corrected document. Once submitted, the original author's server-based speech file is updated, improving their results going forward. The author can still correct their own speech file if they wish. Moving correction of the speech file to the secretary removes the time burden from the author entirely and requires no change to their current working practices. As a result, even with minimal training, the author's speech file continues to improve over time.
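For illustration only, here is a minimal sketch of the routing and feedback loop described above, assuming a simple switch between the recognition-only and recognition-with-proofing options; all class and function names are hypothetical and are not BigHand's actual API.

```python
# Illustrative sketch only; names are hypothetical, not BigHand's API.
from dataclasses import dataclass, field

@dataclass
class SpeakerProfile:
    author: str
    corrections: list = field(default_factory=list)

    def update(self, recognised: str, corrected: str) -> None:
        # Each corrected document is fed back into the author's
        # server-side speech file, improving later recognition.
        self.corrections.append((recognised, corrected))

def proofread(text: str) -> str:
    # Placeholder for the secretary's manual correction pass.
    return text

def route_dictation(recognised: str, mode: str, profile: SpeakerProfile) -> dict:
    if mode == "recognition_only":
        # Recognised text is sent straight back to the author.
        return {"to": profile.author, "text": recognised}
    if mode == "recognition_with_proofing":
        # Text goes to a secretary; the corrected copy they submit
        # is used to update the author's speech file.
        corrected = proofread(recognised)
        profile.update(recognised, corrected)
        return {"to": "secretary", "text": corrected}
    raise ValueError(f"unknown mode: {mode}")
```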

In testing, the Transcription Server correctly recognised 178 of the 233 words in an initial recording (76.4%). After the first correction it recognised 226 of the words (97%), and by the third submission the server-based system recognised 232 words (99.6%).
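For reference, the quoted percentages follow directly from the word counts above; a quick check, using only the figures given:

```python
# Recomputing the accuracy figures quoted above from the raw word counts.
total = 233
results = [("initial recording", 178),
           ("after first correction", 226),
           ("after third submission", 232)]
for label, correct in results:
    print(f"{label}: {correct}/{total} = {correct / total:.1%}")
# initial recording: 178/233 = 76.4%
# after first correction: 226/233 = 97.0%
# after third submission: 232/233 = 99.6%
```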

The total cost of ownership of the new back-end, or server-side, speech recognition module is estimated to be as little as one third that of existing client-side/desktop speech recognition solutions. The module is licensed per user or per server, with minimal training required, no client-side installation and, consequently, no loss of billable time. Additional features include Citrix & Terminal Services compatibility, document templates & bookmarks, acoustic modelling, vocabularies configured in BigHand System Administration, default and custom speaker profiles for different BigHand users, and support for distributed Transcription Servers.

Aside from the Server-side Speech Recognition module, other new features within BigHand 3.3 include:
•    Document attachments & links within the workflow
•    New search engine & User Interface improvements
•    Support for Windows 7 & Windows Server 2008 R2
•    64-bit support extended to incorporate Terminal Client
•    Splitting of voice files & extended external sound file support
•    New & extended API for 3rd party developers

11 replies on “BigHand launch server-side speech recognition”

Perhaps we might finally have a system that can respond correctly to:
“Open the pod bay doors, HAL”

This is an interesting development. A couple of questions arise, though. Firstly, what is one third of the normal price of desktop speech recognition in real money? When I have looked in the past, SR has always seemed prohibitively expensive for something I can't guarantee will be consistently successful for me. Secondly, if I read it right, the accuracy test was done by submitting the same dictation three times. It would be good to see how it fared with three different dictations, which is the way I work as a fee earner. If the accuracy is consistently 99.6%, or close to that, after the third dictation and thereafter, that would be awesome.

I have been using a 'server-based' system like this for four years from another supplier, with integrated document creation and matter management, and all of my dictations are 95%+ accurate without me having to process them several times.

Just to clarify – with the new Server-Based system you don’t need to submit the same dictation more than once; processing it three times in testing was simply to illustrate how quickly you can go from never having used the system to high levels of accuracy (and the clearest way to demonstrate exact percentage improvements is to gauge them against exactly the same dictation). In our experience, submitting three completely different dictations achieves similar results and is actually better for building your acoustic profile. So as a new user, once you have corrected and submitted your first three dictations (ever), subsequent voice-to-text conversions are likely to achieve recognition rates in the high 90s.
A live demonstration at the BigHand User Conference yesterday, where a randomly selected new user from the audience was set up in front of 200 delegates (and the ambient noise that creates), took 10 minutes and returned 100% accuracy using a BlackBerry after only the second dictation the new user sent. The spontaneous applause that followed suggests the test results we listed may not actually do the power of the new Server-Based system justice!

Does this mean BigHand's solution is one third of these prices?

So how is this better than back-end speech recognition modules supplied for some years by the other Digital Dictation suppliers?
Do you also do integrated front-end speech recognition like most other vendors?
