Why is Apple's incredibly cautious, extremely limited rollout of the Siri API for third-party developers bad news?

It's simple:

Apple's limited Siri API hurts voice recognition in apps because no third party can step in and offer developers a more flexible one.

Alexa just gives you text to parse. This is perfect. Let me screw it up. It's my app. Apple, on the other hand, limits Siri to a few "Supported Domains and Intents".
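That "just give me the text" model can be sketched in a few lines: the platform hands your code a transcribed string, and you decide what it means. A minimal illustration in Python, where the handler name and the command grammar are hypothetical (this is not any real Alexa SDK API):

```python
import re

# Hypothetical handler: the platform gives us raw transcribed text,
# and we are free to parse it however we like -- and to get it wrong.
def handle_utterance(text):
    text = text.strip().lower()
    # Accept several phrasings for the same command. Our grammar, our rules.
    m = re.match(r"(?:play|start)(?: the)?(?: playlist)? (\w+)", text)
    if m:
        return {"action": "play_playlist", "playlist": m.group(1)}
    return {"action": "unknown", "raw": text}

print(handle_utterance("Start playlist sports"))   # -> play_playlist, 'sports'
print(handle_utterance("play the sports"))         # -> play_playlist, 'sports'
```

The point is not that this parser is good; it's that the app author gets to write it at all.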

[Apple's list of currently supported Siri domains and intents]

Wow, that limits voice recognition. I'm hopeful this is just a beta limitation, but if Apple continues to force all voice recognition through its own backend and only hands apps the results as a predefined set of intents, Alexa and Google are going to hand Apple its hat with digital assistants.

I understand that this approach lets developers skip translation, but is that really such a big deal? Smart folks are already localizing their apps, translating labels and other text throughout. I also get that Apple might do a better job with grammar, so that there are lots of ways to phrase a command in natural speech rather than being forced into a strange, app-specific grammar. I just don't feel that's a big win.
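To make the contrast concrete, here are the two shapes of result a voice platform might hand an app. The dict layouts below are invented for illustration (they are not the actual SiriKit or Alexa payload formats), though the domain names echo Apple's announced list:

```python
# Invented payloads for illustration only -- not real SiriKit/Alexa formats.

# Apple-style: recognition and intent resolution happen in Apple's backend;
# the app only ever sees a pre-digested intent from a fixed set of domains.
siri_style_result = {
    "domain": "workouts",
    "intent": "StartWorkout",
    "parameters": {"workoutName": "run"},
}

# Alexa-style: the app gets the transcribed text and owns the interpretation.
alexa_style_result = {
    "utterance": "start my morning run",
}

# Hypothetical domain whitelist, in the spirit of Apple's announced list.
SUPPORTED_DOMAINS = {"messaging", "payments", "workouts", "ride_booking"}

def expressible(result):
    # With the intent model, anything outside the supported domains
    # simply cannot reach the app at all.
    return result.get("domain") in SUPPORTED_DOMAINS

print(expressible(siri_style_result))                 # True
print(expressible({"domain": "podcast_playback"}))    # False
```

A podcast app like Overcast falls outside the supported domains, which is exactly the problem: no amount of clever parsing on the developer's side can recover a command the backend refuses to deliver.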

If I have to speak to Overcast, my podcast manager, like, "Overcast, start playlist Sports" and can't say, "Hey, play sports on Overcast," that's fine by me.
