Put the knife down and take a green herb, dude.
One feller's views on the state of everyday computer science & its application (and now, OTHER STUFF) who isn't rich enough to shell out for www.myfreakinfirst-andlast-name.com
FOR ENTERTAINMENT PURPOSES ONLY!!! Back up your data and, when you bike, always wear white.

Monday, July 18, 2016
Why is Apple's incredibly cautious, extremely limited rollout of the Siri API for third-party developers bad news? It's simple: Apple's limited Siri API hurts voice recognition in apps because no third party is going to provide developers with a more flexible alternative. Alexa just gives you text to parse. This is perfect. Let me screw it up. It's my app. Apple, on the other hand, limits Siri to a few "Supported Domains and Intents".
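The Alexa model praised above can be sketched like this. All names here (the intent name, the slot name, the payload shape) are illustrative stand-ins, not the exact Alexa Skills Kit request schema; the point is just that the slot value arrives as plain text and the app interprets it however it likes:

```python
def handle_request(event):
    """Handle a hypothetical Alexa-style skill request.

    The service hands the skill an intent name plus slot values as
    plain text; parsing that text is entirely the app's problem
    (and the app's freedom). Field names are illustrative.
    """
    intent = event["request"]["intent"]
    # The slot value is raw text -- "Sports", "sports!", "my sports
    # playlist" -- and the app decides what it means.
    query = intent["slots"]["Query"]["value"]
    if intent["name"] == "PlayPlaylistIntent":
        return {"action": "play", "playlist": query.strip().lower()}
    return {"action": "unhandled"}

# Example request, shaped like (but not identical to) a skill payload.
event = {
    "request": {
        "intent": {
            "name": "PlayPlaylistIntent",
            "slots": {"Query": {"value": "Sports"}},
        }
    }
}
print(handle_request(event))  # {'action': 'play', 'playlist': 'sports'}
```

SiriKit inverts this: Apple's backend does the parsing and hands your extension a pre-digested intent object, but only for the handful of domains Apple has blessed.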
Wow, that limits voice recognition. I'm hopeful that this is just the beta, essentially, but if Apple continues to force all voice recognition through its own backend and will only give you the results in some sort of logical flowchart, Alexa and Google are going to hand Apple its hat with digital assistants. I understand that this lets us skip translation, but is that really such a big deal? Smart folk are localizing their apps now, translating labels and other text throughout. I also get that Apple might do a better job with grammar, so that there are lots of ways to phrase a command in natural speech rather than forcing a strange, app-specific grammar. I don't feel that's a big win. If I have to speak to Overcast, my podcast manager, like, "Overcast, start playlist Sports" and can't say, "Hey, play sports on Overcast," that's fine by me.

Labels: apple fail, siri, voice
posted by ruffin at 7/18/2016 10:09:00 AM