I understand the appeal of pure functional languages like Haskell where you can keep track of side effects like disk I/O using monads.
Why aren't all system calls considered side effects? For example, heap allocation (which happens automatically in Haskell) isn't tracked. Stack allocation could be treated as a side effect too, although I'm not sure that would be useful. Both change the overall state of the system.
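To make the contrast concrete, here's a small sketch of what I mean (`readConfig` and `sumSquares` are just illustrative names I made up): the I/O action carries `IO` in its type, while the pure function allocates a large list on the heap without any trace of that in its type.

```haskell
-- Disk I/O shows up in the type: readConfig can't be called from pure code.
readConfig :: FilePath -> IO String
readConfig path = readFile path

-- Heap allocation does not: this builds a large list on the heap,
-- yet the type is pure, so the allocation "effect" is invisible.
sumSquares :: Int -> Int
sumSquares n = sum [ i * i | i <- [1 .. n] ]

main :: IO ()
main = print (sumSquares 1000)
```

So the type system draws a line between these two, even though both touch the state of the machine.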
So where is the line drawn for what is a side effect and what isn't? Is it simply at what's the most "useful"? Or is there a more theoretical foundation?