Thursday, May 14, 2009

32bit/64bit programming -- an interesting problem

After boring even myself with my electronics posts, I wanted to get back to writing something in computer science.

Now that 64bit computers have become so common and 64bit programming is becoming a necessity, it has again become necessary to qualify the word programming with either 32bit or 64bit -- basically because the two aren't quite the same. There was a time when we had to make the same qualification between 16bit and 32bit. When I interviewed people in those days, I used to ask them the 'sizeof an integer?' and give credit if someone asked me back whether I meant a 16bit compiler or a 32bit compiler (or at least whether it was Turbo C++ or VC++ :)), and a negative mark if the answer was a flat 2 bytes. Slowly the trend changed, 32bit programming started dominating (i.e., people had no need for, or exposure to, 16bit programming at all), everyone started answering 4 bytes, and I stopped asking that question. Now it's time for the question again :) (btw, I don't claim that 2 bytes vs 4 bytes is the only difference between 16bit and 32bit; this was supposed to be a basic question to start with).

64bit programming is complicated in its own ways, primarily because of inconsistencies in the data models. With a number of data models existing for 64bit (thank God only 2 are predominant), it gets even more complicated. While Linux, Solaris, Mac (and more) have all lined up behind a common data model (LP64), Microsoft is, as usual, on its own unique data model (LLP64). Although it is only Microsoft, given Microsoft's dominance in the OS market, that alone is enough to make LLP64 a compatibility requirement. It is my personal opinion that Microsoft has a point here -- LLP64 requires fewer changes for 32bit code to become 64bit compatible. And I'm pretty sure this compatibility is going to help MS more than anybody else. Understanding the relevant data models (and the one actually in use) is important if you are programming on a 64bit platform, and it becomes even more important if you want to write code that's compatible with both 32bit and 64bit platforms.

Recently I came across an interesting problem worth thinking about, especially if you are writing a library that should be source-compatible across 32bit and 64bit platforms. Since the problem, the discussion and the solution are pretty long, I will take them up in my next post....stay tuned.


  1. Never heard of the terms (LP64, LLP64 etc.) even though I was aware of the differences in how the sizes of ints, longs and pointers are defined in 64 bit programming.

    But isn't that primarily a compiler choice (VC++ vs GCC..), rather than an OS choice? Or are you referring to the data models of the OS/CRT libraries, which obviously would dictate the data models of the applications using them?

  2. Yes, it is finally the compiler's choice -- but in reality the compiler does not have a choice. Any application has to link against one or more of the OS system libraries. This forces the compiler for a particular platform (the OS) to stick to the data model of that OS. For example, VC++ or gcc (in fact the very same binary executable) will choose the data model based on the platform the application is being compiled for.

  3. 20 issues of porting C++ code on the 64-bit platform

    The forgotten problems of 64-bit programs development

    64 bits, Wp64, Visual Studio 2008, Viva64 and all the rest...

    A 64-bit horse that can count