Saturday, May 30, 2009

My pencil arts - #5 - Lady

The first two are photographed, while the last one is scanned.





Sunday, May 17, 2009

32bit/64bit programming -- an interesting problem #2

...continued

I was recently looking at the source of an open-source library. The library is supported on all popular platforms, in both 32bit and 64bit flavors. When you ship a library for both 32bit and 64bit platforms, a new problem kicks in: making sure that applications use the matching version of the library. That is, a 32bit application should use the 32bit version of the library and a 64bit application should use the 64bit version. Cross-linking 32bit and 64bit binaries is obviously not possible, so the linker would fail if an application tried to do that. The harder problem is stopping an application from using the wrong header files of the library. A 64bit application can inadvertently include the 32bit headers of the library and link against the 64bit version of the library -- and it is quite possible that this will succeed without even a warning (although there are cases where it would not).

Consider this function:
//
void __cdecl messup(struct my_struct *);
//
A 64bit translation unit that calls this function after #including the 32bit header for it will still link fine against the 64bit library. But the 32bit and 64bit versions of my_struct may well be defined differently by the library, thanks to the different data-alignment requirements of the two platforms (extra padding bytes, wider members, and so on). The application then assumes one layout of the structure while the library expects another. This can easily lead to a crash. Aah!
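To make the layout problem concrete, here is a hypothetical definition of my_struct (the members are my own invention, purely for illustration); its size and member offsets shift with the data model:
//
struct my_struct {
    int   id;     /* 4 bytes on all the common data models       */
    long  count;  /* 4 bytes on ILP32 and LLP64, 8 bytes on LP64 */
    void *data;   /* 4 bytes on 32bit, 8 bytes on 64bit          */
};
/* sizeof(struct my_struct): typically 12 on ILP32, 16 on LLP64,
   24 on LP64 (alignment padding included) */
//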

Now that's bad. So what does it finally mean? It means that the right headers are just as important as the right binaries, but unfortunately the build tools offer no support to enforce this. To take the problem one step further: given the various data models among 64bit platforms, it is not really the platform that matters, it is the data model.

To restate the problem in its final form: an application being built on data model X should include the headers, and link against the libraries, that were built for data model X.

There could be many ways to solve this problem. A quick answer would be a common header file for all data models, with ifdef'ed code for each data model in the same file; a sketch of that approach follows. This has a few drawbacks (in my opinion): declarations for all data models have to live in the same file (clutter? maintenance?), and it might be very difficult (possible?) to determine the data model in the pre-processor phase so that the right set of declarations goes in for compilation (afaik there is no standard pre-processor macro for the data model, and keying off per-platform macros means handling too many of them -- and what about unknown platforms?).
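A sketch of what that single header might look like (_WIN64 and __LP64__ are the usual suspects here, but this is exactly the fragile platform-sniffing I just complained about):
//
/* mylib.h -- a sketch of the single-header approach */
#if defined(_WIN64)                /* LLP64: 64bit Windows      */
  typedef unsigned __int64 mylib_uptr_t;
#elif defined(__LP64__)            /* LP64: 64bit Linux/Mac/... */
  typedef unsigned long mylib_uptr_t;
#else                              /* and unknown platforms...? */
  typedef unsigned long mylib_uptr_t;
#endif
//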

I was actually impressed by another option, the one used by the library I mentioned. Among the 32bit and 64bit platforms, the predominant data models (LP64, LLP64, ILP32) differ only in the sizes of long and pointer. This library generates its own headers at build time, and while doing so it writes the sizes of long and pointer, as observed during the library's compilation, into the header file. This provides an easy and reliable way to identify, later on, the data model for which the header was built.

The header file generation code would be something as simple as this:
//
fprintf(header_file, "#define MYLIB_SIZEOF_LONG %d\n", (int) sizeof(long));
fprintf(header_file, "#define MYLIB_SIZEOF_PTR %d\n", (int) sizeof(void*));
//
Now that we have a means to carry the data-model metadata of the library forward into its headers, how do we prevent compilation under an inappropriate data model? The idea used was simple, and should be self-explanatory. The library also added the following code to its header file:
//
static char _somearray_[sizeof(long) == MYLIB_SIZEOF_LONG ? 1 : -1];
static char _somearray2_[sizeof(void*) == MYLIB_SIZEOF_PTR ? 1 : -1];
//
If it isn't obvious: these lines declare an array of size -1 (which is illegal and fails compilation) in case the application's sizes of long and pointer don't match the ones recorded in the headers. Cool! That's what we need.

There are 2 tradeoffs I see with this approach:

1. Though the misuse is prevented, the error message isn't friendly. When you use a wrong header file, you get a message like 'invalid array size', 'invalid array subscript' or 'an array should have at least one element'. One might have to resort to Google to figure out the actual issue.

2. Two more names (and 2 bytes) are added to the current translation unit. Underscores and uncommon names make a name collision unlikely, but still :) I would rather use a single struct with one member per enforcement rule, so that only one symbol is added to the global namespace -- see the sketch below.
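Something along those lines (my own sketch, not from the library) could address both points at once: a struct type declaration occupies no bytes and adds only one tag name, and descriptive member names make the compiler point closer to the real problem:
//
struct mylib_abi_check {
    char headers_built_for_different_long_size[sizeof(long) == MYLIB_SIZEOF_LONG ? 1 : -1];
    char headers_built_for_different_ptr_size[sizeof(void*) == MYLIB_SIZEOF_PTR ? 1 : -1];
};
/* on a mismatch, the error now reads something like:
   "size of array 'headers_built_for_different_long_size' is negative" */
//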

Any other solution??

Thursday, May 14, 2009

32bit/64bit programming -- an interesting problem

Having bored even myself with my electronics posts, I just wanted to write something in computer science again.

Now that 64bit computers have become quite common and 64bit programming is becoming a necessity, one needs to qualify the word programming with either 32bit or 64bit -- basically because they just aren't the same. There were yesteryear days when we had to qualify 16bit vs 32bit. When I interviewed people in those times, I used to ask them 'sizeof an integer?' and give them credit if they asked me back whether I meant a 16bit compiler or a 32bit compiler (or at least asked whether it was Turbo C++ or VC++ :)), and a negative mark if the answer was a flat 2 bytes. Slowly the trend changed, 32bit programming started dominating (ie., people had no need for, or exposure to, 16bit programming at all), everyone started answering 4 bytes, and I stopped asking that question. Now it's time for the question again :) (btw, I don't claim that 2 bytes vs 4 bytes is the only difference between 16bit and 32bit; it was supposed to be a basic question to start with).

64bit programming is complicated in its own ways, primarily because of inconsistencies in the data models. The number of data models that exist for 64bit (thank God only 2 are predominant) makes it even more complicated. While Linux, Solaris, Mac (and more) are all lined up behind a common data model (LP64), Microsoft is, as usual, onto its own unique data model (LLP64). Although it is only Microsoft, given its dominance in the OS market, that alone is enough to make it a compatibility requirement. In my personal opinion Microsoft has a point here -- LLP64 requires fewer changes for 32bit code to become 64bit compatible. And I'm pretty sure this compatibility is going to help MS more than anybody else. Understanding the data models (and knowing the one being used) is important if you are programming on a 64bit platform, and even more important if you want to write code that is compatible with both 32bit and 64bit platforms.
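For reference, the three data models mentioned here differ only in these sizes; a quick self-check program makes it visible on whatever compiler you are using:

#include <stdio.h>

/* Prints the sizes that define the current data model.
 * Expected: ILP32 -> 4/4/4, LP64 -> 4/8/8, LLP64 -> 4/4/8. */
int main(void)
{
    printf("int: %d, long: %d, pointer: %d\n",
           (int) sizeof(int), (int) sizeof(long), (int) sizeof(void *));
    return 0;
}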

Recently I came across an interesting problem worth thinking about, especially if you are writing a library that should be source-compatible on both 32bit and 64bit platforms. The problem, discussion and solution being pretty long, I will talk about them in my next post... stay tuned.

Wednesday, May 13, 2009

LCD Digital Clock

This clock is pretty similar in terms of effort to my previous seven-seg LED based digital clock -- but the outcome is just not comparable. See for yourself.

The only difference between this and my previous clock is that the display logic now drives a standard 16x2 alphanumeric LCD instead of multiplexing those 4 seven-segment LEDs (in fact I don't have to do multiplexing at all now, so it is even simpler, with only one timer as opposed to the earlier clock's two). I'm not going to talk about the driver code for the 16x2 alphanumeric LCD, for two reasons. First, it is too involved to put in here and would not really fit the audience. Second, the info is available all around the web; it is just a matter of coding the protocol between the uC and the LCD chip.

Here is the LCD clock in action:

Sunday, May 10, 2009

Digital Clock

I have finally managed to build my own digital clock. This is basically 4 seven segment LEDs put together and driven by my micro-controller (an ATMega8).

I had been working on this for a few weeks. The difficult part of making this clock was multiplexing the 4 seven-segment LEDs. Soldering 4 LEDs to suit the multiplexing circuit was a nightmare. A printed circuit on a PCB would be the right way to go; without one, it is clumsy to build and clumsier to debug. I spent a considerable amount of time getting the soldering done -- it has to be really firm and accurate, all within a limited space. Not being an experienced guy, I found it tough. See it for yourself.





Other than this there are only two more hurdles to the problem:

1. Timing a second -- this is the crucial part of the project, although not that difficult. Will explain shortly.

2. Multiplexing 4 seven segments -- previously I had done only two; also, making the dot (the separator between the hour and minute digits) blink every second.

Timing a second:
Usually I clock the uC at 1MHz; this time I clocked it at 2MHz (it wasn't strictly necessary, but I thought the extra headroom might be useful for driving 4 seven-segs while also running the clock).

Anyways, I used a 16-bit counter to measure a second. This counter gets incremented on every cycle, ie., on a 2MHz clock it would get incremented 2 million times a second. That is a bit too fast for timing, so I configured the prescaler to divide the timer clock by 8 (the smallest division available), giving roughly 256KHz (treating 2MHz as 2^21 Hz, 2^21 / 8 = 2^18). Incidentally, it is possible to program the uC to notify you on every overflow of this 16bit counter, instead of checking for the overflow yourself every time. The overflow routine thus gets called once every 2^16 increments of the counter; with the current clock configuration, that is 4 times a second -- good enough to time a second. So on every 4th call, the routine increments the seconds counter. The rest is obvious.
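I haven't shown the timer setup itself; for reference, this is roughly how timer1 would be configured on the ATMega8 for this scheme (my assumption of the setup, not the code from the actual build -- prescaler /8, overflow interrupt enabled):

void initialize_timer1(void)
{
    TCCR1B = (1 << CS11); // run timer1 at F_CPU/8 (2MHz / 8)
    TIMSK |= (1 << TOIE1); // enable the timer1 overflow interrupt
    TCNT1 = 0; // start the 16-bit counter from 0
}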

Here is the code for the overflow routine:

// g_* are global variables.
ISR(TIMER1_OVF_vect)
{
    static int t = 0; // no. of times the overflow has happened.

    t++;
    g_dot_point = (t / 2) % 2; // dot point stays on for half a second and off for half.

    if(t >= 4) { // 4 overflows make one second
        t = 0;

        g_ss++; // increment the seconds
        if(g_ss > 59) {
            g_ss = 0; g_mm++;
        }
        if(g_mm > 59) {
            g_mm = 0; g_hh++;
        }
        if(g_hh > 23) g_hh = 0;
    }
}
Multiplexing the 4 seven-segs:
If you do not know how display multiplexing works and have not read my earlier post, please consider reading it.

This is pretty similar to my earlier multiplexing code -- just an extension. There are now 8 data pins (one extra for the dot point) and 4 control lines, one per 7segment. The multiplexing is done in the overflow interrupt of a different timer (the 4Hz of timer1 is far too slow to multiplex 4 seven-segs). The following code should be self-explanatory.


ISR(TIMER0_OVF_vect)
{
    static int n = 0; // decides which digit to update now. (right to left, 0 -> 3)
    static int tp[4] = {1, 10, 100, 1000};

    int cur_time = g_hh*100 + g_mm;

    PORTC = 0;

    seg7_write_digit_dot( (cur_time / tp[n]) % 10,   // manipulate the appropriate digit
                          (g_dot_point && n == 2) ); // 3rd digit -> print dot if req.

    PORTC = 1 << n; // select the right digit by sending the correct control line.

    n++; // next digit on next overflow.
    if(n >= 4) n = 0;
}

One missing piece in this project is a means to set the time. The benefit it brings did not excite me enough for the amount of work required; it was kind of boring stuff. So I have configured the clock to always start at 13:25 (the time I was testing this today): I just power the clock on at the right moment, and from then on it runs fine. Anyways, I can reprogram the clock to start at whatever time I want. :)

Here is the digital clock in action:

Wednesday, May 06, 2009

Who do you trust?

Courtesy: Pravs World

In life just don’t trust people, who change their feelings with time…

Instead trust those people whose feelings remain the same, even when the time changes…

Saturday, May 02, 2009

Firefox textbox cursor issue

I recently ran across this issue with Firefox (3.0.9).

The problem was with cursor positioning as I typed in any textbox on any website in Firefox. The cursor did not always advance as I typed; only sometimes did it move to the next position (as it normally should). See how the text got garbled as I was typing 'download firefox' in a Google search box in Firefox.



This started happening all of a sudden and I had no idea what the problem was. I was initially casual about it, thinking it was a bug in Firefox, and restarted Firefox (with the Save and Quit option). When Firefox restarted and restored all the tabs, the problem was still there. I got a bit worried then -- what if this was a side-effect of a phishing attack?

I checked with other browsers and things were fine; I was at least happy that my computer was not compromised -- if anything was, it was only Firefox. I enabled the network watch in Firebug and watched all outgoing URLs, especially from pages where I enter passwords (of course, by giving wrong passwords), but there was no sign of any malfunction. I also have Greasemonkey enabled, so I worried whether some Greasemonkey script had been installed without my knowledge; but no, there were no scripts other than the ones I have for my own use.

It was starting to get beyond me; and that's when I remembered that I had not "really" shut down Firefox, but only hibernated it (Save and Quit). My remaining hope was that some webpage had triggered a bug (possibly in the Adobe Flash player or the JVM?) that got reproduced every time I restored the same set of tabs, so that restarting had no effect on the issue. So I did a clean shutdown of Firefox (Quit without Save) and started fresh; voila! it was gone. It never happened again; as of this moment I assume it was just a bug and my data was not compromised! :)

Monday, April 27, 2009

Remote surveillance on your mobile phone

I assume that you have read my previous post on 'streaming webcam using VLC' that describes how to use VLC to stream your webcam's video over the network.

This opens up a new and simple means of surveillance. The idea becomes more or less interesting depending on the network we choose and where the video is viewed from. To me, viewing the video from some other computer isn't very useful -- unless you are streaming video from home and want to keep an eye on it from your office computer over the Internet; even then, there are cheaper and better ways to do the same.

I was keen in trying to perform surveillance on a mobile phone and was pretty much fascinated when I could do it. It is really awesome to watch a place in real-time from a remote place and that too wirelessly on a mobile. Now that we know how to stream the video over a network, the only missing link is to figure out a way to establish a network between your mobile phone and your comp.

There are multiple ways to do it:

1. Bluetooth PAN (Personal Area Network): This is the simplest and cheapest, and has no running cost. Modern bluetooth devices provide up to 100m of range, but remember to check your phone's capability as well. I would NOT prefer this, as it tends to disconnect and there is no easy way to reconnect remotely. But it works. I sometimes use it to keep an eye on my office cube (for no reason :) ) when I'm just around it.

2. Internet: This is cheap to establish but has a running cost (especially the data charges on the mobile side, which are usually hefty). Given that we are aiming to transfer video (at least QVGA), the bandwidth usage will cost a lot of money; the speed of the network might also be an issue (although a high speed EDGE service on the mobile side might be enough). However, this gives the maximum possible range of surveillance. Literally, from anywhere in the world.

3. Wi-fi: This option is similar to option 1, but much more reliable than a bluetooth PAN. Automatic recovery from signal failures is a plus. I prefer this the most, because my office is fully equipped with Wi-fi. In fact, our other offices (including the ones overseas) are all interconnected, so I can really watch my cube (where I broadcast) from my mobile, wirelessly, from any of our offices. It's really cool (at least for the first few times). Wi-fi does drain the battery much faster than bluetooth (as of this writing), though -- so it may not be suitable for continuous surveillance.

4. Combination: A combination of these options can also be applied. E.g., I can choose the Internet (broadband) on the broadcasting side, and use Wi-fi (maybe in office?) on the mobile side.

How to view on the mobile:

I'm only going to talk about Windows Mobile here (although I believe the same software is available for Symbian phones too). All you need is a video player that can handle streaming video; based on the platform you have, you can find one. Note that the player must support the protocol and codec you used while streaming.

For Windows Mobile, you can use the free TCPMP (The Core Pocket Media Player) or its professional edition, called CorePlayer. I personally believe CorePlayer is the best for playing streaming video.

Sunday, April 26, 2009

Streaming webcam using VLC

VLC is definitely more than just a video player. It has a lot of interesting features and extensions that not everyone explores. By enabling one of its various input interfaces, it is even possible to program against your VLC player -- quite some time back I wrote a clip-list application that automatically directs the VLC player to play only portions of a given video (maybe a post later).

I'm not really interested in streaming my webcam, but it turned out to be useful to me for a different reason. I started writing a post on that and felt this topic was worth a post by itself -- some people might just want to stream their webcam.

It's pretty simple.
  1. Start VLC (all my instructions/snapshots will be as of vlc 0.9.6).
  2. Before proceeding further, let us open VLC's console, so we know if there is any error during the process. To open the console: Menu: Tools -> Add Interface -> Console. VLC will throw log messages into this console.
  3. Menu: Media->Stream (or ctrl -S)
  4. Choose the 'Capture Device' tab (btw, you can stream a video/audio file/DVD using the appropriate tabs)
  5. Under the 'Video device name' drop down choose your camera (you can even stream your desktop by choosing it in 'Capture Mode').
  6. Click on Stream. A new window pops up. This is where you provide the streaming options.



  7. A simple method is to stream over HTTP -- this especially helps to get across firewalls/networks without a glitch. Provide the IP address of the interface on which you want to stream your video. E.g., if you have a multi-homed computer, you might want to bind only to your private network and not your Internet IP. Choose an appropriate port of your choice; even 80 would do.
  8. Under Profile, choose Windows (wmv/asf) -- if you know what you are doing, you can choose whichever profile you see fit.



  9. Now click on Stream and your video should start streaming. If everything went fine, you should see a 'creating httpd' message in the console, with no other relevant error messages following it (sometimes you might not have an appropriate encoder, or the port binding might fail, etc.). Also, the status pane of the VLC UI should show 'Streaming'.
That's it. Now to view the streaming video on any other machine in the network,
  1. open VLC on any other machine
  2. Menu: Open Network (or Ctrl-N)
  3. Select HTTP as the protocol and enter the IP address of the machine that is streaming. The port number field stays disabled for me (workaround: change the protocol to RTP, change the port, and change the protocol back to HTTP :) )
  4. Click on Play.
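If you would rather script this than click through the GUI, the whole setup boils down to one sout chain on the command line. A rough sketch (untested; dshow:// is the Windows capture MRL -- on Linux it would be v4l2:// -- and codec/module names can vary across VLC versions):

vlc dshow:// --sout "#transcode{vcodec=WMV2,vb=800}:std{access=http,mux=asf,dst=:8080/}"

This transcodes the webcam feed to WMV at 800kbps and serves it over HTTP on port 8080, roughly matching the Windows (wmv/asf) profile chosen above.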

Saturday, April 18, 2009

I wish I were a doctor

I'm an engineer by education and profession; all along I've been happy and quite satisfied about it; but not any more.

Engineering explains most of the things that happen around you every day. As I remember
  • 'newton's 3rd law' while walking;
  • 'doppler's frequency shift' from the horn of a speeding car/bike;
  • 'rayleigh's scattering' looking at the orange sky;
  • 'frequency spectrum' while on a traffic signal pondering about why red is stop and green is go;
  • 'acetic acid' when the bearer serves me vinegar for the fried rice;
  • 'potential difference' when a crow casually sits on a metallic electric wire;
(and more ...)

....I've always enjoyed my education (as if I were Neo looking at The Matrix :P). No doubt I still enjoy it, but there is something else far more important to understand than all of these -- yes, our own human body.

=== what follows is my own understanding of a disease; do not rely on the information here if you are looking for some critical information on this disease. ===


I was totally devastated when I got to know about this disease (not sure if I can call it a disease) called 'Guillain-Barré Syndrome', commonly referred to as GB syndrome. It is said to be very uncommon, with a chance of just 1 or 2 in 100,000. What took me by surprise was the complication of the disease; I had never imagined such a problem was possible. In simple terms, GB syndrome is a situation wherein our body's immune system starts destroying our own nerve cells! OMG!!! Apparently there is a generic term for this kind of complication -- autoimmunity. As time proceeds, the disease gets worse, with more and more antibodies generated to act against our own self. A friend of mine is affected by this disease and that is how I know about it; the disease progresses at a very fast pace -- to give an example, my friend suspected some abnormality on the 1st day and went to the doctor; on the 2nd day he felt so weak that he barely managed to go to the doctor with a friend; on the 3rd day he was literally paralyzed and could not move :( The disease is so complicated and the attack so acute that lack of immediate medical intervention might even result in death.

In spite of understanding so many things around us, there are things within us that we don't understand, and those can bring us to a stop. To me, if I have no idea how my own body works, I have no business being proud of knowing how a computer works!! After all, nothing is more crucial than our lives. Now that it is too late to realize or react, I can only wish I were a doctor, to have understood at least a portion of my body!!

One thing is clear: happiness/sadness is subjective and relative. Someone who has GB syndrome would really not worry about this economic slowdown, or losing a job, or a huge home loan on declining real estate; there are always worse things in this world; so be happy with what you have and enjoy your days!!

Thursday, April 09, 2009

Building a serial port logic convertor

As mentioned in my previous post, it is not possible to directly connect the serial port pins to the uC's pins due to the difference in logic levels. Let me talk about what the difference is and how we can build a serial port logic convertor (note: I'm just posting a summary of all the information I collected, so it is all available in one place for someone else).

Serial port (RS232) logic levels:
On a serial port, a logic level of 1 is denoted by any voltage between -3V and -25V, and a logic level of 0 by any voltage between +3V and +25V. For a uC (TTL), a logic level of 0 is 0V to +0.8V and a logic level of 1 is +2.2V to +5V.

Now, there are two problems to be solved:
  1. Serial ports have a wide operating voltage range -- worst case a 50V swing (-25V to +25V)
  2. The logic levels are totally different and incompatible with uCs (TTL).
I came across a naive serial port logic convertor that just uses a voltage regulator (LM7805) to bring the voltage down to the required levels -- but the fundamental assumption there is that serial ports operate above 5V, which isn't necessarily true according to the standards. That said, most serial ports seem to follow an unwritten standard of -12V for logic 1 and +12V for logic 0. But a circuit built on that assumption is probably going to bite us sooner or later.

A common, elegant solution is to use a MAX232 IC, which does the job for us. I got a MAX232 in a 16-pin PDIP package (8 + 8). The IC can drive 2 serial-port I/O channels and make them available at TTL levels. The connections are fairly simple. The following is the schematic, taken from the datasheet.



PIN configurations for a standard serial port:
  • PIN2 -- output pin of serial port (should go into the input pin of MAX232 -- output should be read),
  • PIN3 -- input pin of serial port (should go into the output pin of MAX232 -- input should be sent),
  • PIN5 -- ground
I managed to build my own serial port convertor on a general purpose PCB. This is how it looks after it was soldered.



Under the board (the nasty soldering):


Before integrating this convertor with my uC and fiddling around, I should make sure it works; otherwise it might become very difficult to isolate the problem if I make a software error later. The approach is pretty simple: short-circuit the output and input pins on the uC side of the convertor, creating a loopback serial convertor. This circuit just sends back whatever comes in -- in software terms, an echo server.

The circuit can be tested by connecting it to the computer's serial port and then using HyperTerminal in Windows to connect to the COM port it is on. It is important to set 'Flow control' to None. If everything goes well, just start typing in HyperTerminal and you should see what you are typing echoed back. That proves the serial port logic convertor works fine (by looping back).

Here is the working circuit in action:

Sunday, April 05, 2009

Computer and micro-controller communication

After my digital thermometer, I thought it worthwhile to integrate my micro-controller (uC) projects with my computer. I see tonnes of advantages in doing this. While working with uCs, especially as a computer programmer, I found many things difficult to achieve. The most basic things, like input/output, aren't readily available (note that I'm not blaming the uCs here; after all we are not writing software here, but building hardware) -- and this makes it very difficult to debug or build prototypes. These days I use LEDs (and different blink rates) to debug various scenarios. But imagine how I would have debugged my first LED blink project :) -- there was simply no way. And it's not just debugging: allowing the uC to talk to the computer (when required) opens up a whole new world of communication. So many features become readily available, like
  • keyboard - I can configure my uC parameters at runtime from my keyboard?
  • monitor - I can send debug output or runtime logs to my comp and record them? You know how much work I did just to show 2 digits in hardware.
  • processing power - not sure how useful it is but if I need I can make use of the huge processing power a comp has.
  • internet - this is very interesting. Maybe my uC doesn't need the internet, but how about controlling my uC from the internet?? Maybe controlling my room's lights from office? office -> internet -> webserver on my home comp -> uC -> light; I think this opens up a lot of new opportunities.
Let's see how it evolves :) I have no idea how I'm going to utilize it, but I'm convinced that, given my programming knowledge on computers, it is quite a useful thing to have. Technologies become more and more powerful only when they interact.

Now that we are convinced that a computer interface is useful, it is time to think about the choices. The possible options are serial port, parallel port, USB, bluetooth and infra-red. Bluetooth and infra-red need a hardware counterpart on both sides (on the computer and on the uC hardware), so they aren't practical for me now. USB seems an option, but it requires a USB protocol handler on the uC side (there are free open source USB stacks available for AVR uCs, but they occupy considerable code space -- and with just 8KB available for programming, I would rather rethink). Serial and parallel ports are the simplest options. Of the two, I prefer the serial port, for two reasons.
  1. The serial port requires fewer lines for transmission/reception (technically only 2 lines for data (tx/rx); additional lines for vcc, gnd etc., add up to 4-5 lines, still much better than the parallel port).
  2. The ATMega8 has builtin support for the USART - the standard serial transmission protocol. This makes it handy for the computer and the uC to talk over the serial port (a minimal echo sketch follows this list).
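To give an idea of how handy that is, here is a minimal USART echo sketch for the ATMega8 -- my own sketch, written ahead of the actual project posts, assuming 4800 baud at a 1MHz clock:

#include <avr/io.h>

#define F_CPU 1000000UL // 1MHz
#define BAUD 4800
#define UBRR_VALUE ((F_CPU / (16UL * BAUD)) - 1)

void usart_init(void)
{
    UBRRH = (uint8_t)(UBRR_VALUE >> 8); // set the baud rate registers
    UBRRL = (uint8_t)UBRR_VALUE;
    UCSRB = (1 << RXEN) | (1 << TXEN); // enable receiver and transmitter
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0); // 8 data bits, no parity, 1 stop bit
}

uint8_t usart_getc(void)
{
    while(!(UCSRA & (1 << RXC))); // wait until a byte arrives
    return UDR;
}

void usart_putc(uint8_t c)
{
    while(!(UCSRA & (1 << UDRE))); // wait until the transmit buffer is free
    UDR = c;
}

int main(void)
{
    usart_init();
    while(1)
        usart_putc(usart_getc()); // echo whatever the computer sends
    return 0;
}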
So I chose the serial port as the preferred communication interface. This does not mean I can just run a few wires from the computer's serial port to my uC -- the logic levels of the serial port and the uC are different. I need a convertor for these logic levels before I can communicate. How? Stay tuned.

Monday, March 30, 2009

Digital Thermometer

This is where I was heading. With the last module, I had a 2-digit 7segment LED display ready, which could be used to show the current ambient temperature.

The only remaining part was to integrate the temperature sensor into the system, then read, decode and display the reading. I used the LM35 temperature sensor, which is quite simple and handy to use (the size of a transistor). The LM35 is a centigrade temperature sensor and has 3 terminals -- VCC, Vout and GND. Connect 5V across VCC and GND, and you can calculate the current ambient temperature from the potential at Vout. Per the LM35 datasheet, Vout is (0mV + 10mV/degree). So 100mV at Vout means a sensed temperature of 10 degrees centigrade.

Now the remaining task is to make the micro-controller (I use an ATMega8) read this value. A uC deals only with digital data; this being analog data, it has to be fed through an Analog-to-Digital Convertor (ADC). Incidentally, the ATMega8 has an inbuilt ADC (with 6 channels in the PDIP package). For the ADC to decode the analog data properly, the AREF terminal (pin 21 in PDIP) has to be set to a reference voltage. For example, if the reference voltage is 5V, one unit of a 10bit ADC is defined as (5/1024) volt, ie., ~5mV. So for every 5mV from the analog input (in our case, the LM35), the ADC reading goes up by 1 unit.

In my case, AREF is set to 4.85V, hence one ADC unit is (4.85/1024) volt, ie., 4.736mV. As discussed earlier, the LM35 outputs 10mV per degree centigrade; so my temperature reading is (adc_reading * 4.736 / 10), or (adc_reading * 0.4736), degrees centigrade.

Hardware:
The hardware part is just connecting the LM35 to my previous module. The output of the LM35 goes to ADC channel 2 (pin 25 in PDIP) -- channels 0 and 1 are shared with PORTC's bits 0-1, which I have been using as control bits for selecting the 7segment digit in TDM mode.

Software:
After selecting ADC channel 2, the ADC's current value is read and the temperature is calculated using the formula derived above. The value is stored in a global volatile variable and displayed on the 7segs as in my previous module. The temperature is read every 2 seconds (an arbitrary choice).

Here is the code:

// Author : Gerald Naveen A (ageraldnaveen at gmail dot com)

#include <avr/io.h>
#include <avr/interrupt.h>

#define F_CPU 1000000

#include <util/delay.h>

static volatile uint16_t g_temp_c = 99;

// insert TDM based seven segment code here... interrupt handling etc.,
// didn't want to bloat the codespace while publishing.

void initialize_adc()
{
    ADMUX = (1 << REFS0); // use AVCC as the reference voltage
    ADCSRA = (1 << ADEN) | 7; // enable the ADC && prescaler /128
}

uint16_t read_adc_channel(unsigned int ch)
{
    uint16_t result;

    ADMUX |= (ch & 0x07); // select the ADC channel to read from

    ADCSRA |= (1 << ADSC); // start conversion

    while(!(ADCSRA & (1 << ADIF))); // wait for conversion to complete

    result = ADC; // read the result

    ADCSRA |= (1 << ADIF); // clear the conversion-complete flag (by writing 1 to it)

    return result;
}

int main()
{
    sei();

    initialize_adc();
    // initialize timer etc., as in my previous module

    while(1) {
        // reading from channel 2
        uint16_t val = (uint16_t) (read_adc_channel(2) * 0.4736);
        if(val < 100) // just to avoid noise, have an upper limit (100 too big?)
            g_temp_c = val; // send it for display

        _delay_ms(2000);
    }
    return 0;
}
Here is a snapshot of my setup showing the temperature inside my refrigerator :) I could not shoot a meaningful video, as the project shows an almost constant number. The display actually read 8 degrees when I opened the fridge after leaving the project inside for around 5 minutes; by the time I had placed the breadboard upright for the digits to be visible and clicked, the temperature had shot up a few degrees because the door was open :D


We Must Be Silent

Courtesy: Pravs World

Before we can lead, we must serve.

Before we can serve, we must prepare.

Before we can prepare, we must learn.

Before we can learn, we must listen.

Before we can listen, we must be silent.

Sunday, March 29, 2009

So Live Today!

Courtesy: Pravs World

There are two eternities
that can really break you down.

Yesterday and Tomorrow.
One is gone and the other doesn’t exist.
So Live Today!

Friday, March 27, 2009

Multiplexing two 7segment LEDs

This is a follow up on my previous post on 7segment LED display.

When it comes to displaying 2 digits, there are at least 2 choices. The simplest: in addition to the existing 7 bits for the first digit, add 7 more data bits and let them drive the second digit. The obvious drawback of this approach is the large number of data lines needed. As the number of digits grows, you need 7 bits for each additional digit; at some point the idea does not scale and becomes impractical.

The second choice is to use Time Division Multiplexing (TDM). In this approach the same data bus (7 bits always) is used to drive the digits across all the 7segment LEDs. A separate control signal is added (1 bit per digit in the simple approach; ideally 'log (base 2) n' control lines are enough for n digits). The control signal acts as a 'chip-select' for the appropriate digit, and whatever is on the data bus at that moment lights up that digit's segments. An important caveat of TDM is that a 7seg LED will not retain its digit when control transfers to the next one (obvious?); as a result, only one 7seg is actually lit at any point in time. But thanks to the persistence-of-vision property of the human eye, by switching control between the LEDs at a fast pace it is possible to "virtually" light up more than one 7seg at the same time. And that's the idea behind this project.

Hardware:
The 7bit data bus controls the digit to be displayed (as in my previous post with a single 7seg). Additionally, 2 control lines, one connected to the common anode of each 7seg, select the digit by supplying the positive voltage (+5V). It is actually a good idea to connect each control signal to the base of a transistor and use the transistor as a switch for the positive voltage to the LED -- but I don't have transistors at the moment, and given that the 7seg does not draw too much current, it was safe to drive them directly from the uC's output pins. I would not recommend this, though.

Software:
The software part is a little complicated. The idea of the program is to display the last 2 digits of a running counter. The counter has to be incremented at a slow pace (once per second?) so the human eye can follow it. However, the 7segs have to be refreshed at a very high rate, otherwise we would see the digits flicker (remember, only one of them is lit at any moment). One way to implement this is to run a loop with a sleep of a few ms, keep refreshing the digits, and increment the counter only every 100 iterations (so in effect the counter is incremented only once a second or so). This is naive and may not scale when there is more functionality than just incrementing a counter. The better method is to use timer interrupts. The ATMega8 has 3 timers; I have used timer0. Once enabled, whenever the timer's counter (in this case TCNT0) overflows beyond its size (in this case 8 bits), the uC invokes the corresponding interrupt handler. In the interrupt handler, I've written code to update one digit per invocation.

Here is the code:

/* Author: Gerald Naveen A (ageraldnaveen at gmail dot com) */

// Write the digit on PORTD (0-7 bits)
// Select the digit on PORTC (0-1 bits)
#include <avr/io.h>
#include <avr/interrupt.h>

#define F_CPU 1000000 // 1MHz
#include <util/delay.h>

//my implementation that wraps writing a digit to 7seg
//implements seg7_write_digit
#include <gerald/7seg.h>

// volatile makes sense
volatile int g_cur_val = 0;

void initialize_timer0()
{
    TCCR0 |= (1 << CS01); // configure the prescaler for timer0
    TIMSK |= (1 << TOIE0); // enable timer0 interrupt
    TCNT0 = 0; // initialize timer0 counter to 0
}

// the TIMER0 overflow interrupt handler
ISR(TIMER0_OVF_vect)
{
    static int n = 0; // decides which digit to update now.

    if(!n) {
        // make sure you disable the control signal before changing
        // the data bits. otherwise you can notice a small leakage of
        // data onto the other digit.
        PORTC = 0;
        seg7_write_digit(g_cur_val % 10); // output ones
        PORTC = 0x1;
    }
    else {
        PORTC = 0;
        seg7_write_digit((g_cur_val/10) % 10); // output tens
        PORTC = 0x2;
    }
    n = !n; // toggle the digit selection
}

int main()
{
    DDRD = 0x7F;
    DDRC = 0x03;
    PORTD = 0xFF;
    PORTC = 0; // disable control signals by default

    sei(); // enable global interrupts

    initialize_timer0();

    while(1)
    {
        g_cur_val++; // just keep incrementing the counter
        _delay_ms(100);
    }
    return 0;
}
Here is the project in action:


Monday, March 23, 2009

7Segment LED Display

After getting the micro-controller (uC) to work, it is time to start building small modules for later use in bigger projects. A 7segment LED is one of the common output devices when the data is numerical.

I have a common-anode 7segment LED (red). The 7seg has seven segments, each lit separately by grounding the appropriate cathode (actually not necessarily ground: any potential ~1.5-5V below the anode will do). So, to control 7 segments (with a simple enough circuit), we need 7 bits of info, each driving one segment. As the segments are controlled through the cathodes, the uC has to sink current from the 7seg to light up a segment. This is achieved by outputting a logical 0 on the corresponding bit of the uC's output port.

Ideally, each of those 7 cathodes would connect to its output pin through its own 330ohm current-limiting resistor. For ease of use and testing, I've instead put a single resistor between the 5V supply and the common anode. This is much simpler for a proof of concept and easy to wire on the breadboard. The drawback, however, is that the current gets split among the lit segments, so the brightness varies with the number of segments lit (1 being the brightest and 8 the dimmest). I don't really care at this moment, given that I know the reason.

That's all about the hardware side. The software needs to output the correct bits on the output port to display a digit on the 7seg. Each digit is displayed by lighting 2 or more segments. I've created a static mapping between the digits (0-9) and their corresponding segments-to-be-lit. Based on the number to be shown, the software outputs the bits and the digit appears on the 7seg. To keep it appealing, I've made the program display the last digit of a running counter (as usual, with a sleep between increments to keep it visible to the eye).

Here is the code:

/* Author: Gerald Naveen A (ageraldnaveen at gmail dot com) */

#include <avr/io.h>

#define F_CPU 1000000 // 1MHz
#include <util/delay.h>
#define G_SEGA (1 << 0)
#define G_SEGB (1 << 1)
#define G_SEGC (1 << 2)
#define G_SEGD (1 << 3)
#define G_SEGE (1 << 4)
#define G_SEGF (1 << 5)
#define G_SEGG (1 << 6)

uint8_t seg7_map[10]= {
G_SEGA | G_SEGB | G_SEGC | G_SEGD | G_SEGE | G_SEGF, // 0
G_SEGB | G_SEGC, // 1
G_SEGA | G_SEGB | G_SEGG | G_SEGE | G_SEGD, // 2
G_SEGA | G_SEGB | G_SEGG | G_SEGC | G_SEGD, // 3
G_SEGF | G_SEGG | G_SEGB | G_SEGC, // 4
G_SEGA | G_SEGF | G_SEGG | G_SEGC | G_SEGD, // 5
G_SEGA | G_SEGF | G_SEGG | G_SEGC | G_SEGD | G_SEGE, // 6
G_SEGA | G_SEGB | G_SEGC, // 7
G_SEGA | G_SEGB | G_SEGC | G_SEGD | G_SEGE | G_SEGF | G_SEGG, // 8
G_SEGA | G_SEGB | G_SEGC | G_SEGD | G_SEGF | G_SEGG // 9
};

void seg7_write_digit(uint8_t d)
{
    if(d > 9)
        d = d % 10;

    PORTD = 0xFF ^ (seg7_map[d] & 0xFF); // output logical 0 to light that segment
}

int main()
{
    DDRD = 0xFF;
    int i = 0;
    while(1)
    {
        seg7_write_digit(i++);
        _delay_ms(400);
    }
    return 0;
}
Here is the circuit in action:



The code I wrote here drives one 7seg LED; the next job is to drive more than one -- and yes, that is different. See you then.

Saturday, March 21, 2009

Hello AVR!

Finally, my first AVR micro-controller based project is ON! I always had a great passion for embedded electronics, but never had the chance or guidance to pursue it. This is a first step in that direction -- thanks to the Internet for a handful of helpful articles.

After a week's struggle to set up the whole environment, I managed to successfully flash my first program into my ATMega8 micro-controller and use it to drive 2 LEDs. The ATMega8 is just amazing; for very little power consumption, the features it provides for embedded applications are just too good (in a 28-pin PDIP package it has around 23 I/O pins, a 6-channel ADC, Pulse Width Modulation, a programmable USART, ISP, 3 timers, and clocking at 8-16MHz).

Why the struggle:

This shouldn't have been a struggle, had I not been unlucky enough to get a faulty ATMega8. This being my first AVR project, I had bought tonnes of electronic goods, from a multimeter and soldering iron to an AVR ISP programmer, ATMega8, crystals, resistors, capacitors, inductors, LEDs... (I've actually bought more stuff that I'm yet to use). After setting up the circuit as required, connecting the micro-controller (uC) to the ISP programmer and the programmer to the computer, I was not able to flash my controller at all, and that was the problem :( I struggled and struggled to debug every portion of this chain; I tried a different ISP programmer (built my own serial ISP programmer), but no use. Having achieved no success, the final and only remaining suspect was the ATMega8 uC itself -- the hero of this project. Anyone would wonder why it took me so long to suspect it; true. I did suspect it earlier, but I kept hoping it wasn't the issue, because I didn't have a spare with me and I cannot get one in the nearby electronics shops. Finally I had to personally go to SP Road in Bangalore (Bangalore's version of Chennai's Ritchie Street) and get an ATMega8. Sigh!!! All said and done, it is finally working :D

This is pretty much a 'Hello World', nothing else. The uC just drives the 2 LEDs I have connected on PORTC through 330ohm current-limiting resistors. To keep it a bit fancy, I made the 2 LEDs represent the last 2 bits of a running integer counter. So the LEDs glow in the following pattern as the integer keeps incrementing -- 00, 01, 10, 11 -- with a 500ms delay between increments to keep it visible to the eye.

The code would look something like this (I use the WinAVR cross compiler).

#include <avr/io.h>
#include <util/delay.h>

int main()
{
    DDRC = 0xFF; // Enable output on PORT C
    uint8_t c = 1;
    while(1) {
        PORTC = c++; // output the integer on PORT C, whose 0-1 bits drive the LEDs
        _delay_ms(500);
    }
    return 0;
}
Here is the Hello AVR! in action:


Friday, March 20, 2009

New forms of telemarketing

Thanks to the NDNC (National Do Not Call registry) in India, the telemarketing nuisance has become quite tolerable. There were days when I would come out of a meeting just to attend a call from an automatic advertisement system. These days I hardly get any messages/calls. Great move! But that isn't the news...

I am starting to notice 2 new trends in telemarketing that get around the NDNC regulation.

1. When I call a service provider (bank, phone etc.,) with some query, they make sure they advertise at least one product to me. They take advantage of customers like me who wouldn't just disconnect once the query is answered, but would finish the call properly (including a 'same to you' for the executive's 'have a nice day'). One good thing about this approach: they have become much more pleasant than before -- that's how they make you listen to the advertisement at the end. And it's technically not violating the NDNC!!

2. The first one is rather acceptable, as it happens only when "I" call them. The second type is totally ridiculous. These companies create a website (or a page on their existing website) where you can ask them to call you for more info -- these days I see a number of websites with a 'Register for a callback for more info' option. As before, the companies get hold of a database of phone numbers and names. I "guess" they also have a separate team that just generates fake requests on the website using the info from that database. Now they are free to call us, claiming that we asked for the call. To make it look formal and legal, they also send an automated SMS saying something like 'Recently you had asked for more info on xxx. Thanks for your interest in xxx. Within 24 hours, our service representative will call you back'. Clever??!! I really got pissed off when I got such a message last week, and without surprise they also called me the next day. I confirmed with the caller that the request had been registered with a wrong email id but with the correct name and mobile number. When I claimed that I never raised any request, he didn't seem surprised; he was rather more interested in still explaining the product to me -- which says it all: it is a practice, not an accident. He coolly claimed that one of my relatives could have registered for me -- I could only LOL!!!

Wednesday, March 11, 2009

Capturing an image from a webcam using Python

It is amazingly simple to do this. All you need is the VideoCapture library for Python and the Python Imaging Library (PIL).

The VideoCapture library wraps the interactions between Python and the webcam, or any other camera (not sure if it works with other imaging devices like scanners). It is a very simple-to-use library. You can download the VideoCapture library from here.

The Python Imaging Library (PIL) is the standard Python library for image manipulation. VideoCapture returns the captured image as a PIL Image object, so it can be used with many other Python modules just like any other image. Download PIL here.

The following is a simple example app. It captures an image from the webcam and converts it into grayscale and shows it to the user.

import VideoCapture as VC
from PIL import Image
from PIL import ImageOps
import time

def capture_image():
    cam = VC.Device()     # initialize the webcam
    img = cam.getImage()  # in my testing the first getImage stays black.
    time.sleep(1)         # give some time for the device to come up
    img = cam.getImage()  # capture the current image
    del cam               # no longer need the cam. uninitialize
    return img

if __name__=="__main__":
    img = capture_image()

    # use ImageOps to convert to grayscale.
    # show() saves the image to disk and opens the image.
    # you can also take a look at the Image.save() method to write the image to disk.
    ImageOps.grayscale(img).show()

Simple, isn't it?