Saturday, May 30, 2009

My pencil arts - #5 - Lady

The first two are photographed, while the last one is scanned.





Sunday, May 17, 2009

32bit/64bit programming -- an interesting problem #2

...continued

I was recently looking at the source of an open-source library. The library is supported on all popular platforms, in both 32bit and 64bit flavors. Providing a library for both 32bit and 64bit platforms introduces a new problem: making sure that applications using the library use the correct version of it. That is, a 32bit application should use the 32bit version of the library, and a 64bit application should use the 64bit version. Obviously, it is not possible to cross-link 32bit and 64bit binaries, so the linker would fail if an application tried to do so. The harder problem is to keep an application from using the wrong header files of the library: a 64bit application can inadvertently include the 32bit headers of the library and link against the 64bit version of the library -- and it is quite possible that this will succeed without even a warning (although there are cases where it would not).

Consider this function:
//
void __cdecl messup(struct my_struct *);
//
A 64bit translation unit that calls this function after #including a 32bit header for it would link just fine with the 64bit library. However, the 32bit and 64bit versions of my_struct may well be defined differently by the library, owing to the different data-alignment requirements of 32bit and 64bit platforms (extra padding bytes, say). The application then assumes one layout while the library expects another. This might well lead to a crash. Aah!
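To make the hazard concrete, here is a contrived definition of my_struct (my illustration, not the library's) whose size differs across data models:
//
struct my_struct {
    int   id;       /* 4 bytes on both ILP32 and LP64    */
    long  offset;   /* 4 bytes on ILP32, 8 bytes on LP64 */
    void *buffer;   /* 4 bytes on ILP32, 8 bytes on LP64 */
};
/* sizeof(struct my_struct): 12 on ILP32, 24 on LP64 (alignment padding included). */
//
The library then reads and writes offset and buffer at byte offsets different from where the application believes them to be.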

Now that's bad. So what does it finally mean? It means that the appropriate headers are just as important as the appropriate binaries, but the build tools unfortunately lack support to enforce this. To take the problem one step further, given the various data models among 64bit platforms, it is not just the platform that matters but the data model.

To restate the problem in its final form: an application that is being built against an X data model should include headers and libraries that were built for the X data model.

There could potentially be many ways to solve this problem. A quick answer would be to have a common header file for all data models, with ifdef'ed code for each data model in the same file, as sketched below. This has a few drawbacks (in my opinion): declarations for all data models need to live in the same file (clutter? maintenance?), and it might be very difficult (if at all possible) to determine the data model in the pre-processor phase so that the right set of declarations goes in for compilation (afaik there is no pre-processor directive that identifies the data model, and relying on per-platform directives means too many cases to handle -- and what about unknown platforms?).
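For illustration only, such a shared header might look something like this (the macros tested vary by compiler, which is itself part of the problem; mylib_ssize is a made-up name):
//
#if defined(_WIN64)         /* Microsoft's LLP64: long stays 32bit      */
typedef __int64 mylib_ssize;
#elif defined(__LP64__)     /* LP64 Unixes: long and pointer are 64bit  */
typedef long mylib_ssize;
#else                       /* assume ILP32 everywhere else -- risky!   */
typedef int mylib_ssize;
#endif
//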

I was actually impressed by another option, which the library I mentioned had used. Among the 32bit and 64bit platforms, the predominant data models (ILP32, LP64, LLP64) differ only in the sizes of long and pointer. While generating its own headers (at build time), this library records into the header file the sizes of long and pointer as inferred during the library's compilation. This provides an easy and reliable way to identify, later, the data model for which the header was built.

The header file generation code would be something as simple as this:
//
fprintf(header_file, "#define MYLIB_SIZEOF_LONG %d\n", (int) sizeof(long));
fprintf(header_file, "#define MYLIB_SIZEOF_PTR %d\n", (int) sizeof(void*));
//
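On an LP64 build, for instance, the generated header would then carry:
//
#define MYLIB_SIZEOF_LONG 8
#define MYLIB_SIZEOF_PTR 8
//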
Now that we have a means to carry the metadata about the library's data model forward into the headers, how do we prevent compilation under an inappropriate data model? The idea used was simple, and should be self-explanatory. The library added the following code to its header file:
//
static char _somearray_[sizeof(long) == MYLIB_SIZEOF_LONG ? 1 : -1];
static char _somearray2_[sizeof(void*) == MYLIB_SIZEOF_PTR ? 1 : -1];
//
If it isn't obvious: these lines declare an array of size -1 (which is illegal and fails compilation) in case the application's sizes of long and pointer do not match the ones recorded in the headers. Cool! That's what we need.

There are two tradeoffs I see with this approach:

1. Though the misuse is prevented, the error message isn't friendly. When you use a wrong header file, you get a message like 'invalid array size', 'invalid array subscript' or 'an array should have at least one element'. One might have to turn to Google to figure out the actual issue.

2. Two more names (and two bytes) are added to the current translation unit. Underscores and uncommon names make a name collision unlikely, but still :) I would instead use a single struct with one member per enforcement rule, so that only one symbol is added to the global namespace; a sketch follows below.
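For what it's worth, a minimal sketch of that single-struct idea (the names are mine):
//
struct _mylib_abi_check_ {
    char long_size[sizeof(long) == MYLIB_SIZEOF_LONG ? 1 : -1];
    char ptr_size[sizeof(void*) == MYLIB_SIZEOF_PTR ? 1 : -1];
};
//
Being a mere type declaration, it occupies no storage and adds only a single tag name to the translation unit.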

Any other solution??

Thursday, May 14, 2009

32bit/64bit programming -- an interesting problem

Having bored even myself with my electronics posts, I wanted to write something about computer science again.

Now that 64bit computers have become quite common and 64bit programming is becoming a necessity, it has become necessary to qualify the word programming with either 32bit or 64bit -- basically because they just aren't the same. There were yesteryear days when we had to qualify 16bit vs 32bit. When I interviewed people in those times, I used to ask them the 'sizeof an integer?' and give them credit if they asked me back whether I meant a 16bit compiler or a 32bit compiler (or at least whether it was Turbo C++ or VC++ :)), and a negative mark if the answer was simply 2 bytes. Slowly the trend changed: 32bit programming started dominating (i.e., people had no need for or exposure to 16bit programming at all), everyone started answering 4 bytes, and I stopped asking the question. Now it's time for that question again :) (btw, I don't claim that 2 bytes vs 4 bytes is the only difference between 16bit and 32bit; it was supposed to be a basic opening question).

64bit programming is complicated in its own ways, primarily because of the inconsistencies among the data models. With a number of data models existing for 64bit (thank God only two are predominant), it gets even more complicated. While Linux, Solaris, Mac (and more) are all lined up behind a common data model (LP64), Microsoft is, as usual, on its own unique data model (LLP64). Although it is only Microsoft, given its dominance in the OS market, that alone is enough to make LLP64 a compatibility requirement. It is my personal opinion that Microsoft has a point here -- LLP64 requires fewer changes for 32bit code to become 64bit compatible. And I'm pretty sure this compatibility is going to help MS more than anybody else. Understanding the data models (and knowing the one being used) is important if you are programming on a 64bit platform, and even more so if you want to write code that is compatible with both 32bit and 64bit platforms.
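If you are unsure which data model your compiler uses, a trivial test program (nothing specific to any library) tells you:
//
#include <stdio.h>

int main(void)
{
    /* ILP32: 4/4/4, LP64: 4/8/8, LLP64: 4/4/8 (int/long/pointer) */
    printf("int=%d long=%d pointer=%d\n",
           (int) sizeof(int), (int) sizeof(long), (int) sizeof(void *));
    return 0;
}
//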

Recently I came across an interesting problem worth thinking about, especially if you are writing a library that should be source-compatible on both 32bit and 64bit platforms. The problem, discussion and solution being pretty long, I will talk about it in my next post... stay tuned.

Wednesday, May 13, 2009

LCD Digital Clock

This clock is pretty similar, in terms of effort, to my previous seven-seg LED based digital clock -- but the outcome is just not comparable. See for yourself.

The only difference between this and my previous clock is that the display logic now drives a standard 16x2 alphanumeric LCD instead of multiplexing those 4 seven-segment LEDs (in fact I don't have to do any multiplexing now, so it is even simpler, with only one timer as opposed to the earlier clock's two). I'm not going to talk about the driver code for the 16x2 alphanumeric LCD, for two reasons. First, it is too involved to put in here and would not really fit the audience. Second, the information is available all around the web; it is just a matter of coding the protocol between the uC and the LCD controller.

Here is the LCD clock in action:

Sunday, May 10, 2009

Digital Clock

I have finally managed to build my own digital clock. It is basically 4 seven-segment LEDs put together, driven by my micro-controller (an ATmega8).

I had been working on this for a few weeks. The difficult part of making this clock was multiplexing the 4 seven-segment LEDs. Soldering 4 LEDs to suit the multiplexing circuit was a nightmare. Having a printed circuit on a PCB would be the right way to go; without one, the thing is clumsy to build and clumsier to debug. I spent a considerable amount of time getting the soldering done -- it has to be really firm and accurate, all within a limited space. Not being an experienced guy, I found it tough. See it for yourself.





Other than this, there were only two more hurdles in the project:

1. Timing a second -- the crucial part of the project, though not that difficult. I will explain shortly.

2. Multiplexing the 4 seven-segments -- previously I had done only two; also making the dot (the separator between the hour and minute digits) blink every second.

Timing a second:
Usually I clock the uC to run at 1MHz; this time I clocked it at 2MHz (not that it was necessary -- I thought it might be useful to have precise control and more headroom to drive 4 seven-segs while running the clock).

Anyway, I used a 16-bit counter to measure a second. The counter is incremented on every cycle; i.e., on a 2MHz clock it is incremented 2 million times a second. That is a bit too fast for timekeeping, so I configured the prescaler to divide the timer's clock by 8 (the smallest division available beyond 1), bringing it down to 250kHz. Conveniently, the uC can be programmed to notify you on every overflow of this 16-bit counter, instead of you polling for the overflow yourself. The overflow routine therefore gets called once every 2^16 increments, which with this clock configuration is just under 4 times a second (250000 / 65536 ≈ 3.8). I treat it as 4 -- good enough to time a second for this clock, at the cost of running slightly slow. So on every 4th call, the routine increments the seconds counter. The rest is obvious.
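For reference, a minimal timer1 setup sketch of the kind this assumes (avr-gcc/avr-libc on the ATmega8; register and bit names are from its datasheet, the function name is mine):

#include <avr/io.h>
#include <avr/interrupt.h>

// Configure timer1 to tick at clk/8 and interrupt on overflow.
void timer1_init(void)
{
    TCCR1B |= (1 << CS11);   // prescaler = clk/8 (CS12:CS10 = 010) -> 250kHz
    TIMSK  |= (1 << TOIE1);  // enable the timer1 overflow interrupt
    sei();                   // enable interrupts globally
}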

Here is the code for the overflow routine:

// g_* are global variables (hours, minutes, seconds, dot point) defined elsewhere.
ISR(TIMER1_OVF_vect)        // overflow handler; requires <avr/interrupt.h>
{
    static int t = 0;       // no. of times overflow has happened.

    t++;
    g_dot_point = (t > 2);  // dot stays on for half a second and off for half.

    if (t >= 4) {           // 4 overflows ~= one second
        t = 0;

        g_ss++;             // increment the seconds
        if (g_ss > 59) {
            g_ss = 0; g_mm++;
        }
        if (g_mm > 59) {
            g_mm = 0; g_hh++;
        }
        if (g_hh > 23) g_hh = 0;
    }
}
Multiplexing the 4 seven-segs:
If you do not know how multiplexed displays work and have not read my earlier post, please consider reading it first.

This is pretty similar to my earlier multiplexing code -- just an extension. There are now 8 data pins (one extra for the dot point) and 4 control lines, one per seven-segment. The multiplexing is done in the overflow interrupt of a different timer (the 4Hz of timer1 is far too slow to multiplex 4 seven-segs). The following code should be self-explanatory.


ISR(TIMER0_OVF_vect)
{
    static int n = 0;   // decides which digit to update now (right to left, 0 -> 3).
    static int tp[4] = {1, 10, 100, 1000};  // powers of ten to pick a digit.

    int cur_time = g_hh*100 + g_mm;     // e.g. 13:25 -> 1325

    PORTC = 0;  // blank all digits while the segment lines change.

    seg7_write_digit_dot((cur_time / tp[n]) % 10,   // extract the appropriate digit
                         (g_dot_point && n == 2));  // 3rd digit -> print dot if required.

    PORTC = 1 << n;     // select the right digit via its control line.

    n++;                // next digit on next overflow.
    if (n >= 4) n = 0;
}
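For completeness, the timer0 side needs a similar setup; here is a sketch under the same assumptions (ATmega8 register names, function name mine):

// Configure timer0 (8-bit) to tick at clk/8 and interrupt on overflow.
void timer0_init(void)
{
    TCCR0 |= (1 << CS01);    // prescaler = clk/8 -> 250kHz timer clock
    TIMSK |= (1 << TOIE0);   // enable the timer0 overflow interrupt
}

At 250kHz, the 8-bit counter overflows about 976 times a second, so each digit gets refreshed roughly 244 times a second -- fast enough to look flicker-free.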

One missing piece in this project is a means to set the time. The benefit did not excite me enough for the amount of work required, and it seemed like boring work anyway. So the clock is now configured to always start at 13:25 (the time at which I was testing it today); I can simply choose to power it on at the right moment, and from then on it runs fine. And I can always reprogram the clock to start at whatever time I want. :)

Here is the digital clock in action:

Wednesday, May 06, 2009

Who do you trust?

Courtesy: Pravs World

In life just don’t trust people, who change their feelings with time…

Instead trust those people whose feelings remain the same, even when the time changes…

Saturday, May 02, 2009

Firefox textbox cursor issue

I recently ran across this issue with Firefox (3.0.9).

The problem was with cursor positioning as I typed in any textbox on any website in Firefox. The cursor did not always advance as I typed; only sometimes did it move to the next position (as it normally should). See how the text got garbled as I was typing 'download firefox' into a Google search box in Firefox.



This started happening all of a sudden and I had no idea what the problem was. I was initially casual about it: I assumed it was a bug in Firefox and restarted it (with the 'Save and Quit' option). When Firefox restarted and restored all the tabs, the problem was still there. I got a bit worried then -- what if this was a side-effect of a phishing attack?

I checked with other browsers and things were fine; I was at least happy that my computer as a whole was not compromised; if anything was, it was only Firefox. I enabled network watch in Firebug and watched the outgoing URLs, especially from pages where I enter passwords (of course, entering wrong passwords), but there was no sign of any malfunction. I also have Greasemonkey enabled, so I checked whether some Greasemonkey script had been installed without my knowledge; but no, there were no scripts other than the ones I have for my own use.

It was starting to get beyond me, and that's when I remembered that I had not "really" shut down Firefox, but only hibernated it (Save and Quit). My only hope was that some webpage had triggered a bug (possibly in Adobe Flash Player or the JVM?) which got reproduced every time I restored the same set of tabs, leaving the restart with no effect on the issue. So I did a clean shutdown of Firefox (Quit without Save) and started fresh. Voila! It was gone. It never happened again; as of this moment I assume it was just a bug and my data was not compromised! :)