Exception thrown by listener should be handled
When event listeners are invoked, an exception thrown by any of them should not cause processing to cease. Instead, the exception should be logged, and processing should continue.
Spark is extensible by design: all kinds of plugins can add functionality. I've found that Spark stops processing messages completely if one of these 'add-ons' throws an unexpected exception. The fix for this issue won't address the root cause of such a problem (I don't know what it is), but the effect will now be isolated to that 'add-on': it will no longer prevent further processing of the message. As a side effect, the root cause should now be logged in Spark's error logs. That will allow us to address it too.
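A minimal sketch of the intended dispatch pattern (the class and method names here are illustrative, not Spark's actual API): each listener is invoked inside its own try/catch, so that one failing listener is logged and the rest still run.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class SafeDispatch {

    // Invoke every listener for the event. An exception thrown by one
    // listener is caught and logged, and must not prevent the remaining
    // listeners from being invoked.
    static void fireEvent(List<Consumer<String>> listeners, String event) {
        for (Consumer<String> listener : listeners) {
            try {
                listener.accept(event);
            } catch (Exception e) {
                // Log the root cause instead of letting it propagate and
                // halt message processing.
                System.err.println("Listener threw an exception: " + e);
            }
        }
    }

    public static void main(String[] args) {
        List<Consumer<String>> listeners = new ArrayList<>();
        listeners.add(msg -> System.out.println("first listener: " + msg));
        listeners.add(msg -> { throw new RuntimeException("broken add-on"); });
        listeners.add(msg -> System.out.println("second listener: " + msg));

        // Both working listeners still run despite the failing one in between.
        fireEvent(listeners, "hello");
    }
}
```

Catching only `Exception` (not `Throwable`) is a deliberate choice here: errors such as `OutOfMemoryError` are generally not safe to swallow.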