I'm aware you have an accent, clearly, and if you have a speech impediment (possible, given the characteristics of your speech), then how can you fault Google? Maybe learn to speak better, and then check?
When you said "When's Daylight Savings Time?" it's not out of the realm of possibility that other voice recognition services would interpret that as "Wednesday Savings Time." It sounded quite a bit like that.
Additionally, you have a muffled quality to your voice that can trick some speech recognition services.
Apple does voice to text locally, IIRC, which means Siri is bound to have a much higher failure rate when interpreting speech compared to Google Now or Cortana - that's not to mention both Google and Microsoft simply have better tech and more experience at this than Apple.
Whether people where you're from understand you fine doesn't matter:
1. Your manner of elocution is common in the area where you're from. Speech characteristics tend to vary by geography, and within a geographical region people tend to share certain characteristics. This makes it easier for them to understand other people in the area who have similarly bad speech, since they're accustomed to it. What you hear from yourself is not what we hear, and it's definitely not what Siri hears.
2. You're speaking to a computer that is hearing you through a microphone, not human ears. Additionally, certain characteristics of your voice - like the muffled quality of it - may or may not be exacerbated by your environment (i.e. noise, even small amounts) or the fact that it's being interpreted through a microphone. This is never going to be a 1:1 reproduction of your real voice, and the resulting quality is almost never in your favor. There have been phones that put special emphasis on the quality of their microphones, but those ran Windows Phone 8.x (Lumia 920 and variants) and Android (Note 3); not iOS. Apple likes their profit margins. I think the iPhone 6[+] still only has a mono microphone?
The fact that you can communicate fine with people where you're from is not surprising - at all. There are people from Louisiana that you would laugh at if they spoke to you - some I can barely understand (I am not a foreigner to this). My roommate is from NH, and people at drive-thrus routinely need him to repeat himself - his accent isn't nearly as thick as yours and his voice is clearer.
But half the people I know, I know for sure, would have some serious issues understanding much of what you said in that video. Your accent is incredibly thick, your manner of elocution would be foreign to them, and the muffled quality of your voice would throw them off, as it tends to change the tone of certain vowels and causes some consonant sounds to run into each other/blend - this tends to make words sound very different to other people (like a foreigner with a very thick accent trying to speak English).
I'm not really trying to be mean, just trying to explain what the "issue" is.
Microsoft and Google got huge head starts on Apple in harvesting voice data. Microsoft has been doing it in Windows Phone since 2010 and in Windows Mobile for years before that. Google did it with GOOG-411 and in Android, among other things. They collected all the voices. The flawless ones, the bad ones. Those lacking accents and those with thick ones. Almost every type of vocal quality, stutter, impediment, etc.
And they do a lot of the voice processing in the cloud as well, which gives them a huge lead on Siri in accuracy even when the input quality is not so good.