In a private email, somebody asked about my previous article: "Okay, I can see that there must be state since you've created a state machine. But you aren't mutating any variables, so where does the state come from?" It's a good question. If you feel you "got it" then skip this article. But if you're still hesitant, or think I'm misusing the word "state", please read on.
First, let me restate my contention that state is 1) a matter of something responding to the same inputs differently over time as far as another observer is concerned, and 2) an abstraction and not the representation. Nonetheless, state always ends up with some representation. For instance, files can represent state. And even users of programs can hold state (quick, I'm in a drunken state and I plan to write a long rant, who am I?).
I want to ignore all but memory-based state and show that, ultimately, even the Erlang code must manipulate mutable memory and that message sending effectively leaks a little bit of that detail. Now, on Reddit an offended Erlanger called me a Java fanboy. As any of my loyal readers know, it is true that I think Java is the most incredibly awesome language EVER and I can hardly stop writing glowing things about it. But for this article I'll struggle against my natural instincts and use Ruby, Haskell, assembly, and Erlang.
I'll use Ruby to illustrate some points about state and representation. First the tired old state diagram and then some Ruby
class SockMachineSimple
  def initialize
    @state = :zerocoins
  end

  def insertcoin
    case @state
    when :zerocoins
      @state = :onecoin
      :nothing
    when :onecoin
      @state = :twocoins
      :nothing
    when :twocoins
      :coin
    end
  end

  def pushbutton
    if @state == :twocoins
      @state = :zerocoins
      :tubeofsocks
    else
      :nothing
    end
  end
end
If you don't read Ruby: control flow structures generally evaluate to the value of their last contained expression, so explicit returns aren't needed. :foo is a symbol (kinda like a string). @state is an instance variable (which is private) named state. I called it state because it so clearly matches the state space specified by the diagram. But here's another way to write SockMachine with a completely different representation
class SockMachineComplex
  def initialize
    @count = 0
  end

  def insertcoin
    if @count % 3 == 2
      :coin
    else
      @count = @count + 1
      :nothing
    end
  end

  def pushbutton
    if @count % 3 == 2
      @count = @count + 1
      :tubeofsocks
    else
      :nothing
    end
  end
end
Instances of this class keep track of the total number of coins or button pushes ever accepted and use the modulus operator to decide what to do about it. The representation is quite different from that of SockMachineSimple, but to anybody using this class the difference is mostly irrelevant - it still conforms to the same state diagram. Internally it has a very large number of states but externally it still has the same three. And here's one last stateful machine
class SockMachineDelegated
  def initialize
    @machine = SockMachineComplex.new
  end

  def insertcoin
    @machine.insertcoin
  end

  def pushbutton
    @machine.pushbutton
  end
end
By looking at the source it would appear that this version never mutates any variables after being initialized, but of course it does by calling methods on SockMachineComplex. Hiding the manipulation of representation is not the same thing as making something stateless. And now one last example, one that is totally different because it is not stateful.
class SockMachineStateless
  def pushbutton
    machine = SockMachineComplex.new
    machine.insertcoin
    machine.insertcoin
    machine.pushbutton
  end
end
Internally, pushbutton uses a stateful machine but externally it is not stateful. Short of using a debugger you can't observe the state changes in SockMachineStateless's internal machine. Actually, I guess you could monkey patch SockMachineComplex to do some kind of IO or callback to expose the workings so maybe I shouldn't have used Ruby to illustrate my OO points. Too late now.
Hiding State
Okay, but that's OO stuff. Erlang is a functional language and the code never appeared to have any mutable variables or nested mutable objects. So what gives? Well, to illustrate Erlang I'm going to use Haskell, another functional language. Haskell's great advantage for this exposition is that it's really, really easy to get at some nice clean assembly.
So here's a very silly Haskell function for determining if a positive Int is even or odd. Its implementation is silly, but it allows me to make a point.
flurb :: Int -> String
flurb n = if n == 0
          then "even"
          else if n == 1
               then "odd"
               else flurb $ n - 2
This is a pure function - it mutates nothing. Yet when compiled with ghc -S -O to get AT&T syntax assembly, the function looks like this
Main_zdwflurb_info:
        movl (%ebp),%eax
        cmpl $1,%eax
        jl .Lczq
        cmpl $1,%eax
        jne .Lczo
        movl $rxA_closure,%esi
        addl $4,%ebp
        andl $-4,%esi
        jmp *(%esi)
.Lczo:
        addl $-2,%eax
        movl %eax,(%ebp)
        jmp Main_zdwflurb_info
.Lczq:
        testl %eax,%eax
        jne .Lczo
        movl $rxy_closure,%esi
        addl $4,%ebp
        andl $-4,%esi
        jmp *(%esi)
GHC has compiled the flurb function down to a loop and the immutable n variable is represented with the mutable eax register [1]. Some pseudo Haskell/assembly mix might help illustrate
flurb n =
    movl n, eax                  -- copy n into the eax register
Main_zdwflurb_info:
    if eax == 0
    then "even"
    else if eax == 1
         then "odd"
         else addl $-2, eax            -- decrement eax by 2
              jmp Main_zdwflurb_info   -- jump to the top of the loop
As you can see, the Haskell code does use mutable "variables" at runtime. This is a common optimization technique in functional languages for dealing with direct tail recursion. But all that machinery is hidden from you as a programmer so just like SockMachineStateless the end result is stateless unless you peel back the covers with a debugger.
Finally, Some Damn Erlang
All right, I've written Ruby and Haskell, generated some assembly, and then written in an impossible chimeric language. But my original question was about Erlang. Here's a simple Erlang counter actor
-module(counter).
-export([create/0, increment/1]).

create() ->
    spawn(fun() -> loop(0) end).

increment(I) ->
    I ! self(),
    receive
        X -> X
    end.

loop(N) ->
    receive
        From ->
            From ! N,
            loop(N + 1)
    end.
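To make the statefulness visible from the outside, here's a minimal usage sketch of my own (the counter_demo module is not part of the original code): calling increment/1 repeatedly with the same pid gets back a different answer each time, which is exactly the observer's-eye definition of state.

-module(counter_demo).
-export([demo/0]).

% A sketch of observing the counter's state from another process.
% Assuming the counter module above compiles as written, the same
% call returns a different result each time - that's the observable state.
demo() ->
    C = counter:create(),
    0 = counter:increment(C),   % first call answers 0
    1 = counter:increment(C),   % same input, different answer
    2 = counter:increment(C),   % and again
    ok.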
Again, no variables are mutated. But if I assume that Erlang does the same kind of direct tail call optimization that Haskell does, the pseudo Erlang/Assembly loop looks like this
loop(N) ->
    movl N, eax          % copy N into the eax register
.looptop:
    receive
        From ->
            From ! eax   % send the current value in eax back to the original sender
            inc eax      % increment eax by 1
            jmp .looptop % jump back to the top of the loop
    end.
It's still a loop like the Haskell one, but there's an important difference. Each message receive sends back the current value of the mutable eax register and then mutates it. So this behaves a bit like SockMachineDelegated - the original code didn't appear to directly manipulate any mutable variables, but there was mutation under the covers. And unlike SockMachineStateless, but like SockMachineDelegated, this change of state is visible beyond the local confines. [2]
Now, there are other ways to deal with recursion. I don't know Erlang's implementation, but it doesn't matter. Something is being mutated somewhere and that change is being made visible by messages being sent back. Typically non-tail calls mutate the stack pointer so that it points to new stack frames holding the current values of arguments, or then again some arguments might stay in registers. Tail calls that aren't direct tail recursion might mutate the stack region pointed to by an unchanging stack pointer. Whatever the mechanism, it always involves mutation, and it's always hidden from the programmer when using pure functions. But actor loops are not pure functions. The sends and receives are side effects that allow a tiny bit of the mutable machine representation to be mapped to state that an observer can witness.
But Really Where Are The Mutable Variables?
So all that's fine, but it doesn't really show where the state is in my original Erlang code. The code didn't have an N to stick in a mutable register or a mutable stack frame. Here's the core of it.
zerocoins() ->
    receive
        {coin, From} ->
            From ! nothing,
            onecoin();
        {button, From} ->
            From ! nothing,
            zerocoins()
    end.

onecoin() ->
    receive
        {coin, From} ->
            From ! nothing,
            twocoins();
        {button, From} ->
            From ! nothing,
            onecoin()
    end.

twocoins() ->
    receive
        {coin, From} ->
            From ! coin,
            twocoins();
        {button, From} ->
            From ! tubeofsocks,
            zerocoins()
    end.
For this final answer I'm afraid I have to do a bit of hand waving. The explanation is even more abstract than the mutable variable as register/stack local explanation. You see, no matter what, your code always mutates one register: the instruction pointer. Its job is to point to the next instruction to be executed. As the CPU executes instructions the IP moves around, typically just being bumped up a bit for everything but jumps which move it more substantially.
In purely functional code the IP moves around but these moves can't be observed by other parts of the program as they are happening. In other words, in purely functional code, the changing IP is invisible. It's a completely black box unless you use a low level debugger. It's very much like SockMachineStateless where the internals were mutable but invisible to the outside world.
But with message receives the IP can be induced to move around based on things communicated from outside the function. The IP's current position can then define in large part what a message received by a process will do. If the IP points at the receive in the zerocoins() function then that's one behavior, and if it points at the receive in the twocoins() function then that's another. The different behaviors can be observed by other actors via messages sent back to them. When an actor sends a SockMachine a coin or button message it may indirectly be causing the IP to be mutated. And when it gets back nothing, coin, or tubeofsocks it's really being told, in a very indirect way, something about the value of the IP.
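To make that concrete, here's a minimal sketch of an observing actor. The sockmachine module name, the assumption that zerocoins/0 is exported, and the send_and_wait helper are my own additions for illustration; they're not part of the code above.

-module(sockmachine_client).
-export([demo/0]).

% Send a message tagged with our pid and wait for whatever comes back.
send_and_wait(Machine, Msg) ->
    Machine ! {Msg, self()},
    receive
        Reply -> Reply
    end.

% Assumes the three functions above live in a module named sockmachine
% that exports zerocoins/0 (both are assumptions for this sketch).
demo() ->
    Machine = spawn(fun sockmachine:zerocoins/0),
    nothing     = send_and_wait(Machine, coin),    % zerocoins -> onecoin
    nothing     = send_and_wait(Machine, coin),    % onecoin -> twocoins
    tubeofsocks = send_and_wait(Machine, button),  % twocoins -> zerocoins
    nothing     = send_and_wait(Machine, button),  % same message, different reply
    ok.

The last two lines are the whole point: the same button message gets two different replies, and the only thing that changed in between is which receive the IP was parked on.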
Conclusion
State is not the same thing as representation. Nonetheless, with an omniscient view of things you can always find the representation of state. It might be in bits stored on a drive, it might be in neurons in a user's head, or it might be in a computer's memory and registers. The representation might not be directly visible in the source you're looking at but hidden in low level machinery. It might just be the instruction pointer pointing at different parts of your code base.
If you equate state with representation you end up with the very un-useful property that everything is stateful since at some level of representation every bit of executing code on your computer involves stateful processes. This view robs you of the ability to think about high level languages, especially purely functional ones like Haskell, at an appropriate level of abstraction.[3]
Purely functional code ensures that the state represented by registers, stack pointers, and instruction pointers is kept local. But side effects like message sends and receives allow that low level machinery to be used as the representation of shared mutable state even if you don't see it in your code.
In the end, it's far more useful in most cases to think of state from the point of view of an observer than from the point of view of its implementation. The SockMachine actor was stateful regardless of how that state was represented, simply because other actors could observe it changing state. Digging down into how send and receive allow a modicum of access to the instruction pointer isn't normally a useful model for thinking about stateful actors. So the short answer to the original question is "Who cares how the state was represented? It was shared mutable state."
Footnotes
1. Actually, it appears to be moving n back and forth between the memory pointed to by ebp and the eax register, which seems oddly wasteful given that it never branches out of this function. It also does 3 tests when only 2 should be necessary.
2. Technically the Erlang code probably can't keep the current value of N in a register when calling receive, since receive may cause a task switch to another process, so assume that receive copies registers out to memory before executing and then swaps them back in when it's done.
3. Some people equate state with data, but that's just as bad. Code is data, so all code must be stateful - another useless conclusion.
Comments

If you want to see the native code for an Erlang function you can write hipe:c({M,F,A}, [pp_native]) in a HiPE-enabled system. This will also native-compile the function.
M is the module name.
F is the function name.
A is the arity (number of arguments).
Thanks per, I might give that a try sometime.
Have you looked at Reia? I'm trying to make a Ruby-like faux mutable state language which runs on the Erlang virtual machine:
http://wiki.reia-lang.org
The generated assembly from the Haskell benefits from adding '-fvia-c' and involving GCC in the process, but it still has the unnecessary register->memory->register stuff. No such issue with OCaml.
Not sure if it helps make the point, but I used threads to mimic mutable state variables in Erlang to translate the examples of SICP Chapter #3.
Indeed, I've done the same thing that Chris Rathman has, modulo the exercises from Chapter 3.
ReplyDeleteThis also illustrates that it is indeed possible to create race conditions, even in Erlang. With a bit more work, it's even possible to demonstrate race conditions without explicitly emulating state variables.
However, Erlang's race conditions don't seem to be as pernicious, by and large. I don't really know why.
Any language with side effects has state, because you can check whether the side effects have happened or not. If you don't have side effects, you might as well not bother, because your program doesn't do anything.
As you point out, Erlang has state because the response to a message is not necessarily always the same. The internal state of a process is encoded in the IP.
The benefit of Erlang is not the lack of state, but that each process is in full control of its state changes. Yes, other processes can send messages, but they don't change the state of the process until it receives that message. If the process never does a receive then all these messages go in the bit bucket. This doesn't eliminate all race conditions or deadlocks, but it does eliminate a large variety of problematic ones.
Interesting article.
The instruction pointer and call stack can definitely capture state. And the state will have to be represented using variables or registers at some lower level, unless you are using functional hardware. :-)
This reminds me of RoboZZle, a robot-programming online game (http://robozzle.com/) that I work on. You program the robot using only functions, movement commands and certain conditionals. Using only that, you can implement programs where the robot cycles through states, counts steps, remembers a sequence of turns and reproduces it later in reverse, etc.
So, just because a program doesn't mutate variables, that doesn't imply it is stateless (which would barely even make sense in a strict interpretation).
yegge?
Sorry for being so late, but I missed this.
I think all the discussion about how low-level state is handled clouds the issue and is basically irrelevant. In the OO example state is kept in a variable which is explicitly mutated, while in Erlang and Haskell state is kept in the arguments to the functions called. So in Erlang you basically get:
loop(State) ->
    receive
        Msg ->
            NewState = do_something(Msg, State),
            loop(NewState)
    end.
You can also extend the loop by calling different functions to show which state you are in, encoding some of the state in the current function itself - for example, an FSM.