Precision on size int in C...

Printing 2 to the 30th power as a signed int in C on my 32-bit machine gives 1073741824, a 10-digit number.


The question is how does my computer get 10 digits of precision?

I thought it might be something like

2^K = 10^N
N = K * log10(2)    // log10 is the base-10 logarithm

If I allow 1 bit for the sign, then K = 31 bits. Plugging K = 31 into the
formula above yields N roughly equal to 9.33.

Now if I let K = 32, N is about 9.63.

In either case, it seems I come out to only 9 digits of precision.