Spark has had issues with Unicode for a long time (most often reported with Chinese text). As pointed out in the comments of the linked ticket, this might be due to the use of a physical font (Arial) throughout the Spark project. That worked 10 years ago, but the Unicode world evolves quickly, and Spark needs to support an ever-growing range of characters. It is not clear that all of them can be covered. Ideally Spark should use a font that covers everything (or perhaps a font available on the system), so users wouldn't need to change the font themselves to make their text readable.
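A minimal sketch of what a system-font fallback could look like in Java, the language Spark is written in. This is only an illustration, not Spark's actual code: the class and method names are hypothetical, but `java.awt.Font.canDisplayUpTo` is a real API that reports whether a font has glyphs for every character in a string, and the logical `Dialog` font is mapped by the JRE to a platform font with broad coverage.

```java
import java.awt.Font;

public class FontCoverageCheck {
    // Hypothetical helper: return a font able to display the whole string.
    // Keep the preferred font (e.g. Arial) if it covers every character;
    // otherwise fall back to the logical Dialog font, which the JRE maps
    // to a system font with much wider Unicode coverage.
    static Font fontFor(String text, Font preferred) {
        if (preferred.canDisplayUpTo(text) == -1) {
            return preferred; // -1 means all glyphs are present
        }
        return new Font(Font.DIALOG, preferred.getStyle(), preferred.getSize());
    }

    public static void main(String[] args) {
        Font arial = new Font("Arial", Font.PLAIN, 12);
        // Chinese sample text, as in the reports mentioned above
        Font chosen = fontFor("\u4f60\u597d", arial);
        System.out.println(chosen.getFontName());
    }
}
```

A check like this could run wherever Spark sets a font on a chat component, so Arial (or any user-chosen font) is used when it suffices and the system font steps in only for text it cannot render.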