Yeah; I've recently noticed that almost every time I use string.length in JavaScript, it's wrong and going to break something as soon as an emoji appears. In my code I always want either the number of codepoints or the number of UTF-8 bytes. String.length gives you neither, but unfortunately it looks correct until you test with non-ASCII strings.
Yeah; it's really confusing, but JavaScript - for legacy reasons - treats strings as arrays of UTF-16 code units (essentially UCS-2). But JavaScript also implements the iterator protocol on strings, and that iterator walks the string one Unicode codepoint at a time. That's why "of" loops work differently from "in" loops (for-of uses Symbol.iterator). It also means you can pull a string apart into an array of codepoints with [...somestring].
In JS, given a single astral-plane emoji like "😀", for(i in emoji) will iterate twice (once per UTF-16 code unit), but for(i of emoji) will iterate once.
;)