I doubt it. My handwriting is at least of average neatness, and stroke-based recognition systems still make multiple errors per sentence. It's just a frustrating waste of time, and now that we have touch-screen keyboards there's no longer any point to handwriting recognition.
The only handwriting recognition system which ever worked correctly with a low error rate was Palm Graffiti. It forced the user to learn a new shorthand writing style designed specifically to avoid errors.
The secret to Palm Graffiti's market success was that it hacked user expectations.
Because it asked users to learn a new way of writing, when the recognition failed, users were more likely to blame themselves, like, "Oh, I must not have done that Graffiti letter right, I'll try again."
But when it came to recognizing regular (i.e. natural) handwriting, users believed inherently (i.e. somewhat unconsciously) that they already knew how to write, and the machine was new, so mistakes were the machine's fault.
While we're sharing anecdotes, my handwriting is remarkably terrible, and the iPadOS Notes app does a good job of transcribing it.
I think this supports the grandparent's point about using the actual strokes, including angle and azimuth, to reconstruct intent.
I was also fairly proficient with Graffiti, back in the day, but I consider that an input method, not handwriting recognition. I was facile with T9 as well.
Analyzing the individual strokes works flawlessly with Chinese and Japanese, where the stroke order is fixed (occasionally with a few variants). If you have the stroke information and the user writes correctly, you can recognize characters that even humans would fail to read from the finished glyphs.
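To make that concrete, here's a deliberately toy Python sketch of why fixed stroke order helps. The direction templates are made up and this is nothing like a production recognizer; it only illustrates that an ordered stroke stream carries information a finished glyph doesn't:

    # Toy stroke-order matcher. Each stroke is an ordered list of (x, y)
    # points; a character is identified by its sequence of coarse stroke
    # directions. This is why fixed stroke order makes the problem tractable
    # even when the finished glyph is sloppy.
    import math

    # Hypothetical templates: character -> expected direction sequence.
    TEMPLATES = {
        "十": ["right", "down"],            # horizontal, then vertical
        "二": ["right", "right"],           # two horizontals
        "人": ["down-left", "down-right"],
    }

    def stroke_direction(stroke):
        """Classify one stroke by the net direction from first to last point."""
        (x0, y0), (x1, y1) = stroke[0], stroke[-1]
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0))  # screen y grows downward
        if -30 <= angle <= 30:
            return "right"
        if 60 <= angle <= 120:
            return "down"
        if 30 < angle < 60:
            return "down-right"
        if 120 < angle <= 180:
            return "down-left"
        return "other"

    def recognize(strokes):
        """Return the first template whose direction sequence matches the input."""
        observed = [stroke_direction(s) for s in strokes]
        for char, expected in TEMPLATES.items():
            if observed == expected:
                return char
        return None

    # Two strokes drawn in the canonical order for 十: horizontal, then vertical.
    print(recognize([[(0, 5), (10, 5)], [(5, 0), (5, 10)]]))  # -> 十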
That's great, but I would wager that nearly all writing ever done in human history was done without capturing the strokes. So while the added accuracy is nice, it is virtually useless for most existing written work.
Isn't that a bit irrelevant? If we are talking about workflows that work well for the user, writing everything on paper and then going back and photographing it all is clearly a cumbersome process. Writing on an iPad or similar is the medium in which this shines, and there you do capture the strokes.
That only works if you can assume that everybody using the system you're designing has access to the underlying technology. Sure, if you're designing a new, closed system (an autonomous vehicle on a closed, controlled loop, or a device purpose-built to recognize digits as they're written on it, though at that point why not just have the user enter them on a keypad?), you'll get a better result. But in the general, real-world case (an autonomous vehicle on city streets with other vehicles, or recognizing digits from scanned input without the stroke data), those special-case optimizations are impossible and for all practical purposes don't apply, so appealing to their assistance in increasing accuracy doesn't actually do anything to help the system perform better.
While that's true, having the ability to capture strokes now lets machine-learning models learn which stroke sequences produce a given shape. Just because we didn't capture strokes for everything ever written doesn't mean that data can't help improve recognition accuracy on older, stroke-less material too.
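To be concrete about what "capturing strokes" buys you: the tablet hands you an ordered point stream with pen state (and, on some styluses, tilt and azimuth), which you can encode as a sequence instead of a flat bitmap. A rough, made-up Python sketch of that encoding; the field names are illustrative, not any real device API:

    # How captured stroke data might be encoded for a sequence model,
    # as opposed to the static image you'd get from a scan.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class StrokePoint:
        x: float
        y: float
        azimuth: float   # pen heading around the screen normal, radians
        altitude: float  # pen tilt relative to the screen plane, radians
        pen_down: bool   # False marks the gap between strokes

    def encode(points: List[StrokePoint]) -> List[List[float]]:
        """Turn a stroke stream into per-step feature vectors
        (dx, dy, azimuth, altitude, pen_down) that a sequence classifier
        could consume. A scanned page only gives you the final ink, so
        none of this is recoverable there."""
        features = []
        prev = points[0]
        for p in points:
            features.append([p.x - prev.x, p.y - prev.y,
                             p.azimuth, p.altitude, float(p.pen_down)])
            prev = p
        return features

The point is only that the temporal ordering and pen-state signal exist in this representation and simply aren't present in a scan.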