Wednesday, December 30, 2009

Tata Sky Plus -- Television redefined

Tata Sky Plus (TSP) isn't anything new that I'm introducing to the crowd, but its extraordinary capabilities pushed me to write about it. I knew about Tata Sky Plus right from the time it was launched; I knew about its features and was amazed. But believe me, you need to experience it to appreciate it even further. The flexibility it offers is a really big leap in the realm of television broadcasting.

My USB TV tuner card had somewhat similar features (along with my TVProgramGuide application), but TSP is even better.

The crown of TSP is the ability to pause, rewind and record "live" TV. I know some might feel these are unnecessary features, or a luxury, but I've already used them many times in these 2 days (not because I have them, but because I needed them). It turns out that it is pretty common to miss some critical scene while watching TV, and to suppress the inner urge to rewind it (because there is no way to). We just live with it; but we don't have to, if we've got TSP. I knew earlier that TSP allows viewers to record one channel while watching another (yes, it has dual tuners built in), but I didn't know that it would even allow you to record 2 different channels simultaneously while you watch one of the earlier recorded programmes - this is awesome. It requires a good amount of processing power. With a 160GB hard disk built into the DVR STB, TSP has all the space and power it needs to do wonders -- it's a real multi-processing machine!!

There are many other good features in TSP too that are common across all DTH providers. I've only blogged about what's so special.

With their easy-to-use UI (I believe so), they have integrated all the features very well. Their user guide is a simple 15-page guide with cool guidance. That said, the features it offers are a little beyond the understanding of the common man; e.g., a non-techie person "may" not be able to understand and enjoy all the features.

Monday, December 21, 2009

The high beam non-sense

I am getting the feeling that the high-beam nonsense has increased tremendously on Indian roads in recent times. With more people taking to the roads, and more and more powerful bikes and cars coming up, this has really become a PITA.

I wonder if (those) people even know that there are ways to control their beams. Maybe people like that extra indicator glowing on their dashboard, without bothering to find out what it means. Not just cars -- these days bikes' beams are also too bright to withstand. I hate the bike manufacturers for providing such big domes and reflectors -- especially the Pulsars, Unicorns etc. I can really feel the pain in my retina, and like everyone else I struggle to see through it. No need to mention the consequences for the ride.

The other day, I started from office in a bad mood (as usual). I was driving my car on the service road of Outer Ring Road, Bangalore. There was an oncoming car with high beams on (nothing uncommon). It was really too bright, and with the scarce lighting on the service road, I could see only those two headlights in the whole world around me. Being frustrated already, I didn't want to ignore this. I thought I would at least let him know how it feels. So I just put my headlights on high beam and drove towards him :) I was happy that he would have learnt a lesson by now; but interestingly, as our cars crossed each other, this guy stopped his car, scolded me for my high beam and left. Hmmm. Feeling totally helpless when you are frustrated? Priceless!!

Nothing is going to change until this is seriously considered a traffic violation.

Monday, November 16, 2009

wassup...

I never thought I would run into such a long pause (2 months) in my blog. The reason is very simple: I had too much to write about, and I really didn't want to write all of it. I have been (and am) conscious of the fact that I don't want to dump too much personal info into this blog -- and that's the reason for this silence :) People who know me personally might know how much my life has changed.. okay, this is why I didn't want to write!! shhhh.

I will definitely come back with lots of useful info that I came across during the past 2 months... lots to write. Stay tuned, this blog is still alive!

Expect a few more weeks of silence from my side!

Tuesday, September 08, 2009

Road route update on NH7 between Bangalore and Madurai

This is a status update on the condition of NH7 between Bangalore and Madurai as of the end of August 2009. This is the kind of info everyone looks for before setting out on a road trip (on their own).

Compared to my previous ride on the same route last year (Dec 2008), the road conditions have improved a lot, i.e., a longer stretch has now been upgraded to 4 lanes and partially complete work has now been fully done.

Bangalore -> Hosur ==> Well, I don't see any useful improvement on this stretch. There are a lot of bridges being constructed until Electronic City, and the traffic is a pain anyway. Not a big stretch, so acceptable.

Hosur -> Krishnagiri ==> Bliss. You will soon forget the kind of trouble the Bangalore -> Hosur stretch gave you. But heavy vehicles occupying both lanes and trying to overtake each other at 20kmph are unavoidable. Just sit on the horn for some time until they give way.

Krishnagiri -> Salem ==> Toll gate at Krishnagiri!! Pay the toll and enjoy the ride; the roads are still very good. 100-120kmph easily, almost all the time. Dharmapuri en route goes unnoticed in the quality of roads now. Note: watch out for a few one-ways/under-construction lanes. Yes, I remember a few, but very few.

Salem -> Namakkal ==> Toll gate at Thoppur (just before Salem). Remember to take the Salem bypass, and remember that even the bypass might look like a city; so don't get into the city thinking you are on the bypass.. confused enough? This stretch has improved a lot since my previous ride. I didn't see any issue and it was very pleasant. From this point on, always be cautious about one-ways; you might get redirected once in a while.

Namakkal -> Karur ==> I remember a toll gate somewhere around here. Not sure of the exact location. It was just opening on the day I travelled. I can't tell you the joy I had when I was stopped at the toll gate and let go without paying anything, with an 'it is opening only from midnight, Sir!' :) Good roads, but watch out for one or two redirections.

Karur -> Dindigul ==> This stretch isn't complete yet. The roads are still coming up, so you get to ride on one side of the road most of the time. Especially when you are on the wrong lane (driving on the right side of the highway), make sure to switch on your headlights and put them on high beam. Some drivers on highways literally sleep!! There was one more toll gate coming up, but it isn't going to open for now, given the condition of the roads. Use this stretch to relax, and do not try to maintain the speeds you did some time back.

Dindigul -> Madurai ==> This stretch is around 60kms. I am surprised to see that it has gone from nothing to near complete in the last one year. The route is almost totally done till Cholavandhan (~15-20kms away from Madurai). Beyond this point, there is literally no space to fit a 4-lane highway into Madurai, and I don't see any work happening towards Madurai. So, if you are going to Madurai city (or not going via Virudhunagar), you need to bite this bullet. This is the same old road between Madurai and Dindigul; it's pretty narrow, and at peak times there is almost no way to overtake. Better be patient; dangerous curves.

That said, I think there is a bypass from Madurai towards Virudhunagar/Tirunelveli that starts right after Cholavandhan. No traffic is allowed on it at the moment, but that might be the idea. Coming on NH7 and going through Madurai towards Tirunelveli or Kanyakumari would make no sense, given the traffic. And if you are going towards Trichy/Chennai from Dindigul via Madurai, may God save you!! (the ideal route is Dindigul -> Trichy -> Chennai). If you are reaching Rameshwaram via Madurai, you have a whole city to go through. There is not much that can be done here. The bypass is way too long to consider.

All in all, the Bangalore -> Madurai stretch on NH7 is getting better and better!!

Friday, September 04, 2009

Dream Big

Courtesy: Pravs World

If there was ever a time to dare, to make a difference,
to embark on something worth doing, It is Now!

Not for any grand cause, necessarily.., but for something that tugs at your heart,
Something that’s your inspiration, something that’s your dream.

You owe it to yourself to make your days here count.
Have fun, dig deep, stretch, Dream Big.

Saturday, August 29, 2009

Caller Location Info v 0.3 for WinMo


Here is the next version of my Caller Location Info app for Windows Mobile (for India).

Release-notes:
1. Includes a bunch of new additions to the mobile numbering. At least 250-300 new numbers added.
2. Includes 2 new service providers - Tata Docomo and Loop Mobile.
3. No changes to the STD list.
4. No bug fixes (no known bugs actually :D)

The installation instructions and other properties remain the same. See the earlier post for that.

Download the CAB installer.

Enjoy!

Thursday, August 27, 2009

Car tyre pressure for long drives

I recently went on a long drive (450kms) in a single stretch. I had the usual question of 'to what pressure should I inflate my car's tyres?'. This was the first time I was going all alone over such a long distance, so I decided to understand a bit more about air pressure and do the right thing.

On the Internet, there was no good summary of what the right thing is. I read a number of forums and articles before I believed I understood. Let me explain a few basics of air pressure so you understand better. It is a well-known fact that these two external factors affect tyre pressure:

1. The car's running time: When the car is on the move, the air pressure increases (the tyres heat up from flexing and road friction, which warms the air inside). So it is generally advised not to fill up air after driving for any real distance (>2km?) -- because by the time one reaches the petrol bunk, the pressure would have gone up by a few psi (the unit of measurement for tyre pressure). If there is no other option, it is advised to leave the car at rest for an appropriate amount of time before filling up air (mostly impractical), OR to fill up a few psi (2?) more than what you intend, to account for the expansion.

2. Ambient temperature: This is straightforward. Air expands on heating -- thus the pressure inside the tyre rises roughly in proportion to the absolute temperature. So it is advised to fill up air in the morning or in the evening, when the temperature has cooled down a bit. This matters because the recommended air pressure is really the "minimum" pressure recommended for the tyres at that load. This is why the values change from car to car even if the tyre properties are the same. The maximum pressure a tyre can withstand is usually embossed on the tyre itself (usually around 44 psi, in India).
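
To get a feel for how much temperature alone moves the needle, here is a rough sketch using Gay-Lussac's law (pressure proportional to absolute temperature at constant volume). The temperatures and pressures are illustrative assumptions, not manufacturer data, and real tyres flex and change volume a bit, so treat this as a back-of-the-envelope estimate only:

```python
# Rough estimate of how tyre pressure changes with temperature, using
# Gay-Lussac's law (P1/T1 = P2/T2 at constant volume). Note that the
# gauge reading must be converted to absolute pressure first.
ATMOSPHERIC_PSI = 14.7  # approx. sea-level atmospheric pressure

def pressure_at_temp(gauge_psi, temp_c_initial, temp_c_final):
    """Return the new gauge pressure after a temperature change."""
    t1 = temp_c_initial + 273.15  # convert Celsius to kelvin
    t2 = temp_c_final + 273.15
    absolute = gauge_psi + ATMOSPHERIC_PSI
    return absolute * (t2 / t1) - ATMOSPHERIC_PSI

# Fill 30 psi on a cool 25 C morning; the tyre warms to 50 C on the highway.
warm = pressure_at_temp(30.0, 25.0, 50.0)
print(round(warm, 1))  # prints 33.7
```

A 25-degree warm-up gains you only 3-4 psi, which is why a small deliberate margin at fill-up time is usually enough.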

Based on all these facts, during a long drive it makes sense to expect the tyre pressure to increase considerably. As a result, a common misconception is to fill up a few psi less than the recommended value. Unfortunately, there is a plausible-sounding explanation that supports this misconception -- I held a similar opinion earlier. However, it turns out that this is "wrong". At reduced air pressure, the area of the tread in contact with the road increases -- this gives better comfort, but poorer handling of the vehicle. Due to the increased contact area, the heat generated at the tread increases -- in the long run, this leads to faster wear and tear of the tread and poor control. An already worn-out tyre might even burst at high speeds -- not to mention what happens to the driver.

To add to it, when I reached home (after 450kms) and measured my tyre pressure again (if you don't have a gauge, get one for long drives), 2 psi had vanished from all my tyres!!! This also means that on a long drive, due to the pounding the tyres take (bumps and jumps), air also leaks gradually (all 4 of my tyres are brand new and have nozzle caps, so nothing to suspect on the tyres themselves). Watch out, so you don't go below the recommended pressure midway through your drive.

Usually the recommended air pressure is much lower than the maximum pressure the tyre can withstand (e.g., for my car, the max tyre pressure is 44 psi and the recommended is around 30 psi) -- so on a long drive it is advisable to inflate the tyres a few psi above the recommendation, for the reasons mentioned above. I had inflated to 34 psi for this drive.

Understand, inflate and have a safe drive!!

Disclaimer: That said, I am not responsible if there is any unexpected event due to the increased pressure. Use your own conscience to validate the info above.

Wednesday, August 12, 2009

Dangerous Windows Explorer options

If you are a Windows user, if you do not have file extensions visible in Windows Explorer (option: 'Hide file extensions for known types'), and if you also have the habit of viewing files in any mode other than 'Details' (Thumbnails, Tiles, Icons, List), then you definitely need to be aware of this vulnerability awaiting you.

Last week, I plugged one of my pen drives into my friend's comp and noticed that there was an extra folder (named 'New Folder'). I was sure I didn't create it, but was just curious as to how it got created. The natural reaction was to click on the folder to see what files it had. I clicked on it, but nothing happened; the folder didn't open. This is when I realized the possible trap.

After analysing, it turned out that my friend's comp was already infected with a virus; I guess the virus automatically copies itself to any removable media attached to the comp. It spreads itself onto removable drives and creates an autorun.inf to get control of the next comp where the pen drive is inserted (as explained in my earlier post). While that explains why the 'New Folder' was created, it was still unclear what was inside it. Later, I figured out that Windows Explorer was configured (by default) to not show file extensions, and that the view mode was Tiles -- so some otherwise-apparent clues went missing, and before we realize it, the damage is done. It turned out that the 'New Folder' was not a folder/directory at all, but an application with its icon set exactly the same as the normal Windows folder icon. See it for yourself.



In this scenario, MyFolder is an application, while MyFolder2 is a real folder -- can you spot any difference?? Absolutely not. The immediate reaction for anyone would be to open the new folder, but you'd end up executing the application!! This is a real danger.

Then I disabled 'Hide extensions for known file types' and changed the view to Details mode; now you should spot the difference:



The application in the picture was created by me on my dev setup for testing; it is totally harmless. Apparently, when any application has its icon set to the same as the Windows folder icon, McAfee jumps in and tags it as a 'W32/Generic.worm.b' virus. Even my test application was caught promptly -- not bad.

So please be aware of this and think twice before clicking on anything from a removable drive (even if it looks like a folder). If the computer was not infected earlier, all it takes is one click to get infected (and as I mentioned in my previous post, do not let autorun kick in when you insert a removable drive). It is a good practice to show extensions all the time (unfortunately, Windows Explorer hides them by default :( ). Another good practice is to create 'system restore points' regularly, so you can get back to a clean state if required (this won't be 100% effective in all cases).
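
If you'd rather not rely on your eyes at all, a simple script can do the spotting for you. Here is a minimal sketch of my own (a hypothetical helper, not part of any AV product) that walks a drive and lists program files -- the kind that would masquerade as folders once Explorer hides their extensions:

```python
# Sketch: list files on a removable drive whose extension marks them as
# programs -- e.g. an executable named "New Folder.exe" that Explorer
# would display as just "New Folder" with extensions hidden.
# This is a quick triage aid, not a replacement for a real AV scan.
import os

SUSPICIOUS_EXTS = {".exe", ".scr", ".com", ".pif"}

def find_disguised_executables(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            _stem, ext = os.path.splitext(name)
            # Any program file sitting on a removable drive deserves a
            # second look, since its extension may be hidden in Explorer.
            if ext.lower() in SUSPICIOUS_EXTS:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Run it against the drive root, e.g. `find_disguised_executables("G:\\")`, and inspect anything it flags before double-clicking.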

Wednesday, August 05, 2009

Spam or not?

Whenever I receive any "interesting" spam, I have the habit of investigating, tracking down the sender and trying to analyze the sender's motivation. This email caught my attention in the same way.

See the email for yourself.



Yes, that is all it had. My initial reaction was that the sender was an amateur who didn't know how to make the mail look legitimate -- but not for long, as I discovered that this email was totally legitimate and was indeed sent by Standard Chartered Bank - SCB (unless!! : read the epilogue of this post).

Ok, let's go through the email. The email is poorly formatted (maybe spam?). The only useful content is the 'Click here' link, and it points to something like http://pop4.mailserv.in/sc/lt.php?id= eh8IBgAGA19XRAwETAA6XweWkKK (more and more like spam). I clicked on the link, and I was taken to a page that looked exactly like SCB's site; it didn't take me long to figure out that the page was actually the real SCB internet banking login page, and not a fake one!! I verified the SSL certificates: they were valid, trusted and belonged to SCB (thanks also to the further confirmation from Firefox, which said I had visited this site more than 100 times earlier -- 100 is just an illustration, don't try to guess anything). At this point, I had no answer. If this was spam, why would I be redirected to the bank's page? And if it was not spam, why would a bank send such a suspicious email and redirect to a login page through a third-party link??!!! Instead of speculating, I thought I would analyze the technical aspects of the email first.

Given that the link didn't point directly to the bank's site (but to mailserv.in), I first verified whether sc.com (see the from address of the email) belongs to SCB. It turned out that sc.com is legitimate and registered to SCB's head office in Hong Kong. Now that sc.com was valid, I checked the email headers to see if the email was indeed sent from the 'sc.com' domain. The email had come from an MX at cleanmail.in, and the return path pointed to sc.mailserv.in. Now it made sense why the link was pointing back to mailserv.in. At this moment, I thought it was spam originating from mailserv.in. But when I dug up more details, I was shocked: mailserv.in belongs to a legitimate email service provider registered in Mumbai. When I went through their customer list, I started to believe that this email is legitimate -- all of their customers are well-known institutions in India, including a handful of banks (interestingly, SCB is not listed as one of them). But a customer list of this grade made me believe that an email from mailserv.in would not be spam.

One last thing I still wanted was to take a look at how the redirection from pop4.mailserv.in to SCB's internet banking site happens -- just to make sure there was no injection of any XSS stuff. I did a wget on the given URL: pop4.mailserv.in just returns HTTP status code 302 (Moved Temporarily) and redirects to SCB's legitimate page. This was a clean redirection; that resolved the last question, and the sender has no "hacking" benefit out of this.
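
The same check can be done programmatically. Below is a small sketch of the idea: given the raw HTTP response (shown here against a made-up captured response, since the original tracking URL is long dead, and the Location host is my guess at what a bank redirect might look like), confirm it is a bare 3xx with a Location on the expected host and an empty body that couldn't smuggle any script in:

```python
# Sketch: decide whether a redirect response is "clean" -- a bare 3xx
# with a Location header pointing at the expected host, and no HTML or
# script body served before the browser moves on.

def is_clean_redirect(raw_response, expected_host):
    head, _, body = raw_response.partition("\r\n\r\n")
    lines = head.split("\r\n")
    status = lines[0].split()  # e.g. ["HTTP/1.1", "302", "Found"]
    if len(status) < 2 or status[1] not in ("301", "302", "303", "307"):
        return False
    location = ""
    for line in lines[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "location":
            location = value.strip()
    # The redirect must land on the expected host, with nothing in the body.
    return expected_host in location and body.strip() == ""

# Hypothetical captured response (hosts are illustrative, not verified).
captured = (
    "HTTP/1.1 302 Found\r\n"
    "Location: https://ibank.standardchartered.co.in/\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)
print(is_clean_redirect(captured, "standardchartered.co.in"))  # True
```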

After all this, I finally believed that this mail was legitimate and not spam. I am really depressed by the security implications of such an email. If a legitimate institution can send a spam-like email, why wouldn't it be easy for a spammer to send a legitimate-looking email and deceive the user??!!

I still "wish" this to be a spam (I just can't believe a bank would do this!!); If it was a spam, the only benefit for the sender that I can "speculate" is: Maybe the sender is tracking the number of users who actually click on this link and navigate. Maybe the sender would send a number of such legitimate messages, and then suddenly a phishing email, so the user doesn't realize the difference and gets trapped. I can't think of anything else.

Any other thoughts?

If you enjoyed reading this analysis, you might also be interested in my analysis of another interesting spam I received.

Disclaimer: I've no confirmation from SCB that this is a legitimate email. So it could still be spam. Use your own conscience and decide for yourself.

Friday, July 31, 2009

Booting Linux live from mobile phone

== If you had landed here thinking this is about booting Linux on your mobile phone, "NO". This is about booting Linux on a comp/laptop from a mobile phone ==

The concept of booting and using Linux without having to install it on a hard disk (aka a Live CD) has been around for years (at least 10?), thanks to Knoppix -- the pioneer of this approach. It later evolved into booting a live CD from media other than just CDs, like pen drives, etc. With later BIOSes supporting USB devices in the boot list, this has become pretty handy. I was a big fan of Damn Small Linux (DSL), which really is a damn small Linux (with just a 50MB footprint) and goes almost unnoticed on your pen drive. I used to happily carry DSL around on my pen drive 2-3 years back.

But hold on. Why do I need to carry a bootable Linux on my pen drive?? I need a comp to boot it anyway, and the comp would anyway have an OS installed. Then why? True, but it is handy. I primarily see it as useful for 2 purposes:

1. To use as a recovery tool if something goes terribly wrong with my comp -- I do back up my master boot record (MBR), partition table (pretty easy to back up/restore from Linux), etc., so I can recover my PC if something goes wrong at that level. This is also useful for analyzing any comp that fails to boot.

2. I can carry a set of applications along with me. If I have a comp in front of me, I would like to have a C/C++ compiler on it, maybe a Python interpreter and sometimes an office suite (MS Office or OpenOffice). I cannot expect this everywhere I need it. Well, my own personal comp in my home town (one of the most powerful ones I had during my engineering days, with 64MB RAM and a 500MHz processor :D) now barely has anything useful on it. It does not have most of the applications I would need today, and sometimes it does not even boot when I need it to :) No Photoshop, Python, games etc. Carrying a Linux satisfies all (or at least most) of these requirements.

As useful as this is, the major setback is the need to carry that pen drive around all the time; this drawback supersedes all its advantages, and I mostly did not have my pen drive with me when I needed it. At some point, I even forgot which of my 'n' pen drives had the live Linux installed -- and that was the end of my use of this approach.

Recently, this thought struck my mind -- why shouldn't I use my mobile phone as a pen drive, since I carry it all the time? And now that I have a Windows Mobile phone, I was really keen to see my "Windows" phone striving hard to help me boot Linux on my comp :). But I wasn't sure if it would work without a dedicated memory card; I was very clear that this is useful only if I can keep using the memory card for everything else on my mobile, as earlier. I tried various flavours including Fedora, DSL and Knoppix. My first choice was DSL -- it being so small -- but it failed to boot off any pen drive on my laptop and my desktop (gave up! Maybe it does not support a variety of hardware?). Fedora 11 was the next choice. I used its live USB creator, but that failed to boot too -- I didn't spend much time on it. Then I thought I would try the legend, Knoppix, and it just worked effortlessly. The only important thing to note in this project is that we need to boot Linux off a FAT16 drive. The Knoppix live CD comes with the isolinux boot loader, which operates off an ISO -- that wouldn't help us here. Thankfully, syslinux is a boot loader that does this job for us.

So, here is what you need to do to boot Linux from your pen drive, your Windows Mobile, or any other mobile that supports mass-storage mode.

On Windows: (TRY AT YOUR OWN RISK!!!)

1. Download Knoppix Live CD ISO image.
2. Download syslinux.
3. If mobile, put your Mobile in USB Mass-storage mode and connect it to your PC (else connect your pendrive to your PC).
4. Extract the Knoppix ISO to a folder, say C:\MyFolder (many tools can do this, including WinZip, 7-Zip etc.).
5. Copy all the files from the C:\MyFolder\boot folder to C:\MyFolder\ (i.e., bring the files inside the boot folder up to the parent directory).
6. Rename C:\MyFolder\isolinux.cfg to C:\MyFolder\syslinux.cfg (thankfully the config files are similar between isolinux and syslinux).
7. Delete the isolinux.bin file from C:\MyFolder\ (we don't need this).
8. Now copy all the files from C:\MyFolder to your mass-storage drive (say G:). Note: the directory structure should be such that all the files in C:\MyFolder end up in the root directory of your mass-storage drive.
9. IMPORTANT: Be very careful at this step. If you give a wrong drive letter, you may render your computer unbootable. Open up a command prompt, CD to the folder where you have syslinux and run 'win32\syslinux.exe -ma G:' (I assume G: is your mass-storage drive).
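
For the record, the file shuffling in steps 5-7 can be sketched in a few lines of Python (paths and layout are the illustrative ones from the steps above; step 9 still has to be done with syslinux.exe itself):

```python
# Sketch of steps 5-7: hoist the contents of the extracted ISO's boot/
# folder up to the top level, rename isolinux.cfg for syslinux, and
# drop the now-unneeded isolinux loader. Paths are illustrative.
import os
import shutil

def prepare_syslinux_tree(extracted_root):
    boot_dir = os.path.join(extracted_root, "boot")
    # Step 5: bring every file under boot/ up to the root of the tree.
    for dirpath, _dirnames, filenames in os.walk(boot_dir):
        for name in filenames:
            shutil.move(os.path.join(dirpath, name),
                        os.path.join(extracted_root, name))
    # Step 6: isolinux.cfg and syslinux.cfg share the same format.
    os.rename(os.path.join(extracted_root, "isolinux.cfg"),
              os.path.join(extracted_root, "syslinux.cfg"))
    # Step 7: the isolinux loader itself is not needed by syslinux.
    os.remove(os.path.join(extracted_root, "isolinux.bin"))
```

After running this against the extracted folder, copy the tree to the drive root (step 8) and install the boot loader (step 9) as described.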

You are all set. Make sure you have USB removable device / USB HDD in the boot list of your computer (with priority ahead of your HDD). If all went well, connect your mobile/pen drive to your comp and reboot; you should see Knoppix booting off it.

Here is my Lenovo T400 laptop booting Knoppix from my Windows Mobile ASUS P320. (The video is a little long; please feel free to skip ahead if you get bored, but I wanted to capture even the granular details for the interested, so I didn't trim it down.)



Have fun!

Friday, July 24, 2009

Windows Mobile Mass-storage drains battery

I recently discovered that my Windows Mobile phone (ASUS P320) drains its battery if the USB connection setting is set to 'Mass storage'. In fact, it drains terribly -- almost half the usual battery life. It drains even if the phone is not connected over USB to any host, and even if the phone is in sleep mode. Horrible and unexpected!!

So, in case you have a Windows Mobile phone and suffer from pretty poor battery performance (less than 1.5 days), check if you have changed the USB setting to 'Mass storage' (Start->Settings->Connections->USB). Set it to 'ActiveSync turbo mode'. Change to mass storage only when required, and change it back afterwards. It is quite likely that this problem is specific to Windows Mobile 6.1, but I would not be surprised if it exists in other versions too.

Also, many people complain about WinMo phones switching off well before the battery is totally empty. It is important to understand that the phone switching itself off on low battery is for your own benefit; if a WinMo phone runs totally out of battery, it is as good as a hard reset (you lose everything in your phone memory, including applications (in phone memory), messages, contacts etc.). Ideally it switches off when the battery charge goes below 10%, and you can still boot it back in an emergency (it will try to switch off again); I've even made calls at such times. Also, even when the phone is switched off, it still uses battery to keep the memory contents alive; so the remaining 6-10% is reserved to preserve your data until you get to a charger. So it is better not to force a boot. I think there is usually also a small internal backup battery, to support swapping phone batteries without losing data, but that's not going to last long.

Thursday, July 16, 2009

TVProgramGuide -- developer's view - #2

This is a follow up of my previous post on developer's view on my TVProgramGuide application.

Two APIs were identified: InitIR and GetIRCode.

The next step is to identify their return types, calling convention, parameter lists and types. Let me explain how I discovered them for one of the APIs (GetIRCode - the difficult one) by reverse-engineering its disassembly.

Disassembly of ThatDll!GetIRCode

10001190 83ec0c sub esp,0Ch
10001193 8d442400 lea eax,[esp]
10001197 53 push ebx
10001198 8d4c2408 lea ecx,[esp+8]
1000119c 33db xor ebx,ebx
1000119e 50 push eax
1000119f 51 push ecx
100011a0 c6442410c0 mov byte ptr [esp+10h],0C0h
100011a5 c644241102 mov byte ptr [esp+11h],2
100011aa 885c2412 mov byte ptr [esp+12h],bl
100011ae 885c2413 mov byte ptr [esp+13h],bl
100011b2 885c2414 mov byte ptr [esp+14h],bl
100011b6 885c2415 mov byte ptr [esp+15h],bl
100011ba c644241601 mov byte ptr [esp+16h],1
100011bf 885c2417 mov byte ptr [esp+17h],bl
100011c3 c744240c08000000 mov dword ptr [esp+0Ch],8
100011cb e870ffffff call ThatDll!SendVendorCmd (10001140)
100011d0 3bc3 cmp eax,ebx
100011d2 5b pop ebx
100011d3 8b542410 mov edx,dword ptr [esp+10h]
100011d7 750c jne ThatDll!GetIRCode+0x55 (100011e5)
100011d9 8a4c2404 mov cl,byte ptr [esp+4]
100011dd 880a mov byte ptr [edx],cl
100011df 83c40c add esp,0Ch
100011e2 c20400 ret 4
100011e5 c602ff mov byte ptr [edx],0FFh
100011e8 83c40c add esp,0Ch
100011eb c20400 ret 4

Calling convention:
An easy technique to identify the calling convention is to look at the 'ret' statements @25,28 (I would also advise double-checking the caller's next instruction in the disassembly, to make sure it doesn't play with the stack pointer). In the stdcall calling convention, the callee is supposed to free the stack space used for arguments. I think here we are debating only between the stdcall and cdecl calling conventions. So, if the 'ret' statement has a value given as an operand (the number of bytes to free up on the stack), then the calling convention should be stdcall. In most cases, DLL exports are stdcall -- and this observation confirms it for this DLL.

Return type (and out params):
In this case, we had already discovered that the technical return type is an int (returning 0x0 on a keypress and 0xff on no keypress). However, we are still lacking the keycode when a key is pressed. If you look at the epilogue of the function, there are two branches (clearly two rets @25,28). Note the "mov byte ptr [edx],0FFh" @26. This looks like the error case, when SendVendorCmd failed (@21). A close look at the disassembly (@18,21) reveals that this code path is taken when the return value of SendVendorCmd is non-zero (!=ebx); it should also be noted that the return value of GetIRCode is the same as the return value of SendVendorCmd (there is no change to eax after the call to SendVendorCmd). If you look at the success path (@22,23), an out parameter of SendVendorCmd (at [esp+4]) is copied to the address in edx (note the byte ptr mov -- so the out param value is an unsigned char).

Argument list and types:
We are almost done. The only missing piece is to figure out what edx points to. This is a crucial and challenging part; please bear with me. The statement 'mov edx, dword ptr [esp+10h]' @20 means that the parameter's address is on the stack. The return statement denotes a 4-byte cleanup on the stack, so it is likely that the function takes only one parameter, and that it is a pointer to a byte (unsigned char*). However, it is not clear whether [esp+10h] is a local stack variable of this function or really an argument pushed by the caller -- use of ebp would have made this much clearer, but we don't have a choice here. The disassembly of SendVendorCmd ('ret 8') tells me that it uses 8 bytes of stack for its arguments, so after the call to SendVendorCmd, esp is 8 bytes higher than just before the call. Now, if you carefully account for all the push and pop instructions in this function before [esp+10h] @20, you will find that [esp+10h] points exactly to [esp_0+4], where esp_0 is the value of esp at the function's entry. And [esp_0+4] skips the return address and lands on the first argument to the function.
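
To make the bookkeeping concrete, here is the same stack-pointer accounting written out as arithmetic (deltas taken from the listing above; a positive delta means esp moved back up toward its entry value):

```python
# Walking the stack-pointer deltas through GetIRCode to confirm that
# [esp+10h] at the 'mov edx' (@20) is the first caller-pushed argument,
# i.e. [esp_0 + 4], where esp_0 is esp at function entry (pointing at
# the return address).

delta = 0
delta -= 0x0C   # @1: sub esp, 0Ch (locals)
delta -= 4      # @2: push ebx
delta -= 4      # @5: push eax
delta -= 4      # @6: push ecx
delta += 8      # @17: call SendVendorCmd -- it does 'ret 8', so it
                #      frees its own 8 bytes of arguments on return
delta += 4      # @19: pop ebx

# At 'mov edx, dword ptr [esp+10h]' the effective address relative to
# esp_0 is therefore:
effective = delta + 0x10
print(hex(effective))  # 0x4 -> [esp_0 + 4], i.e. the first argument
```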

And hence the function is 'int (__stdcall *ThatDll!GetIRCode)(unsigned char*)'

I believe I don't have to say much about the InitIR API; it was pretty simple, and the prototype turned out to be 'void (__stdcall *InitIR)(void)' :D.

Now, how to use this info to dynamically hook into the existing TV tuner application is the only critical missing part. Stay tuned!

Sunday, July 12, 2009

TVProgramGuide -- developer's view

=== this post is for educational purposes only. please do not apply these concepts to hack into or do illegal stuff ===

As promised earlier, here is my post on what's behind my TVProgramGuide application.

For the ones who do not have the background on the topic, and for the ones who did not read my post on my application: I have TV tuner hardware and an application that works with it, and I can use the TV tuner remote to control the TV. Now I need to find a way to hook into this design and capture the TV remote key presses, so I can use them in my own applications (important: without affecting the TV app's functionality). I'll mention only the critical and difficult portions of this app.

The whole problem can be split into four major sub-problems:

1. Finding out the DLL and the APIs that the current app uses to read the remote key presses.
2. Reverse-engineer the APIs to find their calling convention, return types and parameter lists (and types) -- you will definitely need these if you are going to hook into the APIs.
3. Find a means to hook/patch the functionality to allow both the TV app and my app to capture the key presses -- multiple options are available; read on.
4. Decipher the codes to match the real keys on the remote -- if 1-3 are complete, this shouldn't be difficult.

Let me talk about each one of them in detail.

Finding out the DLL and APIs:


The TV application and the tuner hardware are from different vendors. This gave me hope that there would most likely be a DLL which provides the set of APIs for the two to talk to each other. Using dependency-walker, I found the list of modules that the TV tuner application depends on. I filtered out a set of non-system DLLs that were installed along with the TV tuner application, then listed the "exports" table of each of those DLLs and looked for any reasonably named API that might relate to this. In one of those DLLs (I'm not disclosing the name of the DLL, to keep this hack anonymous), there was an API named 'GetIRCode' -- having known that remotes work on Infra-Red (IR), this was suspicious. There were other APIs named 'InitIR', 'GetOneButtonStatus' etc., which seemed closer and closer to the functionality I was looking for. I was almost sure.

Here's the export table of the identified DLL:



To confirm that these were indeed the APIs I was looking for, I attached a debugger (windbg) to the TV tuner application and added breakpoints on all the APIs in that DLL. As the application started, I got a breakpoint hit in Dll!InitIR. Makes sense. Then I could see breakpoints continuously being hit on Dll!GetIRCode (yes, continuously). I realized that there was no callback mechanism: the application continuously polls for keypresses by calling GetIRCode (ahem!, waste of CPU). But is it really what I think? Just to make sure this API was doing something useful on a key press, I looked at the return code of GetIRCode after each invocation. It returned 0xff (likely a -1 in a signed byte) most of the time. I set a conditional breakpoint on the return statement of this function to break if the return value is != 0xff (i.e., break if register eax != 0xff). From my testing I realized that whenever I pressed a key on my TV remote, this breakpoint was hit and the return value was 0 -- hmmm, almost there, but where is the key code??? hacking isn't that easy :). The good news was that, during runtime (when I tested with remote key presses), the TV tuner application did not call any other API of this DLL.

So, at the end of this step, I had discovered the DLL and the two APIs that I might need to hook into, and also that GetIRCode returns 0 right after a key was pressed (note: I still did not know how to identify the key, just hoping that this API would help) -- but I had no idea about the calling convention, the return types, or the parameters I need to pass in to these APIs and their types. Way to go!!

Step 2 -- reverse-engineering those APIs for their calling convention and parameter lists/types -- is a long topic. Stay tuned!

Sunday, June 28, 2009

A data alignment issue -- example

=== the examples assume a 32bit compilation ===

Understanding what data alignment is, and realizing the need for it, is a separate topic by itself; I'm not going to write about that, as there are lots of articles around.

Issue: The first member of a struct need not be located at the start (offset 0) of the struct instance (yes, even assuming there are no virtual functions).

In most cases the first member does happen to sit at offset 0; the point here is that it needn't. I've personally seen the contrary behavior recently, which led me to write this (albeit on a 64-bit compiler).

Consider the struct definition,
//
typedef struct _A {
int b;
} A;
//
the sizeof(A) will be 4. This is trivial. Now consider this struct,
//
typedef struct _A2 {
char a;
int b;
} A2;
//
A2 has one char in addition. Some people might expect sizeof(A2) to be 5 -- but in reality sizeof(A2) is 8, due to the data alignment requirement. So where have the extra 3 bytes (called padding) gone? Let's examine the offsets of the individual data members to figure out the gap.

Assuming a2 is an instance of A2,

offset of A2::a => (char*) &a2.a - (char*) &a2; // offset of a2.a from the starting of a2 => 0
offset of A2::b => (char*) &a2.b - (char*) &a2; // offset of a2.b from the starting of a2 => 4

Clearly a2.a starts from the zeroth byte, and a2.b starts at the fourth byte. The layout of the struct is as follows {a2.a|*|*|*|a2.b|a2.b|a2.b|a2.b} where * represents the padding bytes and each | represents a byte boundary.

It is important to note that the C/C++ standards do not allow the compiler to change the ordering of a struct's members in its memory representation (in fact, for plain C structs the standard goes further and guarantees there is no padding before the first member; a C++ class that is not standard-layout carries no such guarantee). However, if you think about it a while, you will realize that, without any change to the ordering of the members, the padding could be moved around while still fulfilling the data alignment requirement.

For e.g., imagine the memory layout of A2 were {*|a2.a|*|*|a2.b|a2.b|a2.b|a2.b}, where * represents a padding byte and each | a byte boundary. Such a layout would still satisfy the alignment requirement, and it invalidates the assumption about the address of the first member of the struct -- the offset of a2.a is now 1 instead of 0.

Ok, but why would someone rely on this assumption??! Probably not directly; it does not make sense to use a2 where a2.a is to be used. However, in nested structures, this might go unnoticed. Consider this scenario,
//
typedef struct _A3 {
char a;
void *ptr; // assume that by design, ptr points to A4 or A5
} A3;

typedef struct _A4 {
char c;
} A4;

typedef struct _A5 {
char c;
int n;
} A5;

void print_members(A3 *pa3)
{
// assume by design: in most cases pa3->ptr points to an A4 instance,
// and given that A4 and A5 have the same first member,
// it might be tempting to write code like the following.
A4* pa4 = (A4*) pa3->ptr;
printf("%c ", (char) pa3->a);
printf("%c ", (char) pa4->c); // trying to print A4::c or A5::c
if(IS_A5(pa3->ptr)) // IS_A5: some application-specific check, not shown here
printf("%d ", (int) ((A5*)pa3->ptr)->n);
}
//
The printf of pa4->c above may or may not work as intended when pa3->ptr actually points to an A5 instance, depending on the result of data alignment for the struct A5. This is a perfect disguise for this untrue assumption. So, beware!!

Tuesday, June 23, 2009

TVProgramGuide - my application for tv tuners

This is about an application that I've recently developed, which can show a real-time TV programme guide on the fly, overlaid on the TV tuner video.

It is really interesting to see how subtle things can make a big difference in the way we carry out everyday life. This idea struck me a few weeks back, when I landed at entertainment.oneindia.in accidentally while trying to find out what movies were playing for the day. The idea was to integrate this info about TV programmes into the existing TV tuner application, so I can fetch it whenever I need it. When I was first thinking about this idea, I didn't really expect it to be so useful -- today I just can't watch TV without the aid of this app.

The idea sounds interesting, but it is as vague as a patent. I spent the first week thinking about the feasibility of this application, and about how to integrate it into the TV system. I was pretty clear that this app was going to be of no use if I could not provide a means to use it wirelessly (yes, using the TV remote). If I have to come to the comp and use the mouse to find out 'what's coming up next?', I would rather visit the website in my browser and know about it. It is as simple as having a bookmark in my browser.

But I wasn't sure if I could hook into the TV remote and get its signals. Even if I could, I also had to ensure that my hooking did not affect the normal functioning of the TV tuner application, which is already using the TV remote and reacting to its signals. As mentioned in my earlier post on my TV tuner, the tuner hardware (Trident) and the TV tuner application (Honestech) seem to be from different vendors -- this gave me hope that there should be an interface available somewhere (although unpublished) through which the existing TV tuner app receives the TV remote key presses. I will definitely write a separate post on the technical details of how and what disassembling I had to do; for now, suffice it to say that I've managed to discover the undocumented APIs that are used internally, and the appropriate DLLs, and managed to hook into them seamlessly, so the TV tuner application has no idea that I'm hooked in.

This hooking was done in C/C++. The remaining task was to download the TV programmes from the website (note: oneindia does not have the list for all channels, so I had to do a generic design to support any website; with abstract classes and interfaces, this isn't a problem anyway). For any non-system programming I prefer Python (if not UI related), and C# if it has an associated UI. I admire the power of the modules/class libraries that these platforms provide; awesome! .NET comes in handy with HTTP classes (System.Net.HttpWebRequest) to handle the HTTP requests/responses for downloading the programmes, and I used regular expressions (System.Text.RegularExpressions.Regex) to parse the HTML output and extract the programme schedule. With a number of choices for sharing info (remote key presses) between the C/C++ hook and the C# interface, I chose the simplest one: the Windows registry. I intend to write a developer post on this later, but that's the overall technology behind this app.

Important features:

1. Automatically shows the 'now playing' programme on every channel change. The 'now playing' item is picked based on the current channel no. and the current time. This window shows up for 10 seconds and auto-hides. Very useful when we glance through the channels.
2. Channel change is detected by monitoring the remote key presses, including numbers, up, down, recall etc. The interesting part was that channels can be changed by a sequence of key presses: e.g., key 1 followed by key 2 followed by key 3 in a short interval is not 3 different channel changes, but a single change to channel no. 123. This had to be handled differently.
3. A special mode can be entered by a special key combination (that does nothing to the tuner app) in the TV remote. In this mode, the app overlays the 'coming up next 5' programmes list over the video. This info does not auto-hide. It can be closed by the Mute key on TV remote.
4. Seamless integration, so TV tuner works just well as before.

Here is the video showing my application in action as I watch TV on my comp:



Overall, I'm very happy with this app; it is one of the applications that I use the most. I hope to publish it soon, after I make some generalizations (currently channel associations are hard-coded, and not all channels are supported).

Wednesday, June 10, 2009

Fronttech TV tuner

After contemplating this for quite some time, I finally decided to buy a TV tuner. I don't intend (or should I say, don't want) to spend too much time on TV. I was fairly confident that I was not going to buy a TV for sure; if at all I went for something, it would be a TV tuner card for my existing LCD monitor.

Then, the question was whether to go for an internal TV tuner, a USB TV tuner or an external TV tuner. For those who aren't aware, there are 2 types of external TV tuner -- one that drives a CRT and another that drives an LCD. So be sure to buy the correct one in case you have an LCD monitor. However, I didn't choose this, because I don't want to keep changing the cables to the monitor every time I need to switch between TV and comp (my monitor does not have dual input), and I might want to work in parallel while watching TV (or at least check my stock prices, email, orkut). Those being the problems with the external ones, I was also fairly confident that the internal/USB ones would have a lot more "features" than the external ones -- like recording, PiP (Picture in Picture), a scheduler etc. -- and, given that they are software controlled, I was hopeful of trying my hand at writing some code for it (I had no idea what I would do with it, but I sensed an opportunity).

Having thrown out the external solution, I had to choose between the internal one and the USB one -- both satisfy my criterion of software control. I chose USB as it looks safer to me than a PCI interface any time -- I'm hopeful that a heavy spike on my cable wire would cause less damage to my comp over the USB port. It also provides the convenience of moving the tuner between my comp and laptop. Also, the internal ones might have difficulty with the remote due to line-of-sight issues (not sure if they even provide a remote for internal ones).

Anyway, after all this I finally zeroed in on the Fronttech USB TV Tuner (to be frank, I didn't really spend time analyzing which brand to buy; Fronttech was easily available in the nearby shops, so I just went for it). I bought it a month back and, believe me, I have no regrets today. I think Fronttech is anyway just selling it under its own brand, but it is not theirs. The hardware driver identifies itself as 'Trident Analog Video' to Windows, and the TV tuner software is from Honestech. Possibly, Fronttech takes care of marketing, sales and service/support.

It is so handy, fits in your palm. This is how it looks (Photo from Fronttech).


It comes with the following key features:
1. Real-time recording (at a click of a button on remote)
2. Recording scheduler (I can just schedule a recording and forget about it; watch it later).
3. Time-shifting (as they call it -- more later)
4. Supports NTSC and PAL

Unfortunately, PiP is not available, and I later learned that it is not available in any TV tuner (at least in the ones available here in India). As it requires support at the hardware level (tuning in two frequencies at the same time), I cannot do any magic in software to emulate PiP. Ok, forget it.

Recording is awesome, with no lag in the TV playback in real-time. There are options to select the resolution of recording along with the video encoding that is to be used.

I should definitely mention the time-shifting. An easier way to describe time-shifting is the 'pause your live TV' slogan that many digital TV ads boast about. Yes, this feature allows you to pause live TV and continue from there. This is made technically feasible (just like in any other digital TV solution) by continuously recording the TV channel in the background. One very good thing about this TV tuner (IMHO) is that I can enable time-shifting only when I need it; this prevents the application from continuously thrashing my hard disk when I really don't need pause/playback (I believe Tata Sky Plus does continuous recording).

Another "strange" thing I discovered (at least I didn't know this earlier): the auto-scan option for channel tuning is not as smart as the one in a real TV. That is, it does not scan the complete frequency range (at every possible increment) and infer whether a clear signal is available at each frequency. Apparently, on hooking into their application, I figured out that they have a constant pre-defined set of frequencies to be tuned for a given country (I even remember a function in one of their DLLs, GetFreqencyTableForCountryName). Ok, how does it matter to me? It does. They have some 360 channel frequencies pre-defined for Indian cable TV networks, but not all of them carry a clear signal or a transmitted channel. The tuner application is not smart enough to discover the absence of a channel and skip bookmarking it. So at the end of an auto-scan you will have 360 channels bookmarked, with only around 100 having meaningful video output, scattered all over. Grrr.. There is also no way to tune a single channel; i.e., you cannot say "switch to channel 1 and start tuning the band" to associate a different channel with channel 1 -- because the channel number by itself defines the frequency to be used, and channel no. 1 will always point to the same frequency. Thankfully, they have the option to name the channels and to make channel shifting work only on the favored channels instead of blindly going through all channels. I was a bit uncomfortable here, but after a month, now I'm used to it.

Important: Do remember to back up your "channel.she" file from their installation folder once you have configured your TV tuner and named all those channels. It is useful for restoring the channel list in case you need to reinstall the application or you lose it (I lost it once -- how, is a different lengthy story!!).

Except for these few glitches, over all I'm very satisfied with the quality of its work with some stunning features.

Tuesday, June 09, 2009

Focus On Yourself

Courtesy: Pravs World

Some of us waste our time waiting for people to live up to our expectations.
We are so concerned about what others are doing that we don’t do anything ourselves.

Its not so important what others are upto; compared to what you are doing.
Focus on what you do, your work; Not on others.

Thursday, June 04, 2009

MOTAS - The Mystery Of Time And Space

For those who have played and enjoyed 'Crimson Room' and 'Viridian Room' and still want to quench your thirst for such games, this should be a hurray!!

As described in their website, MOTAS is an online graphic adventure game in which the adventurer has to solve riddles and puzzles, find and use objects, escape from locked rooms, find hidden passages and be a detective and examine everything to unlock the doors of the mystery of time and space.

With 20 levels, this game has lots of fun and thrill. Unlike 'Crimson Room', this game needs a little bit of technical exposure (in multiple levels, you have to boot a computer and get some data out of it). You also need to understand 'time travel' and apply it in the game's heuristics. With a little help from the online community, it should be lots of fun completing all the levels (I believe a few, a very few, of the puzzles in the game are very difficult to solve/guess).

Click here to play and Enjoy!

Saturday, May 30, 2009

My pencil arts - #5 - Lady

The first 2 are photographed, while the last one is scanned.





Sunday, May 17, 2009

32bit/64bit programming -- an interesting problem #2

...continued

I was recently looking at the source of an open-source library. The library is supported on all popular platforms, in both 32-bit and 64-bit. When providing a library for both 32-bit and 64-bit platforms, a new problem kicks in: making sure that an application using the library uses the correct version of it. That is, a 32-bit application should use the 32-bit version of the library and a 64-bit application should use the 64-bit version. Obviously, it is not possible to cross-link 32-bit and 64-bit binaries, so the linker will fail if an application tries to do so. The more difficult problem is to restrict the application from using the wrong header files of the library. A 64-bit application can inadvertently include the 32-bit headers of the library and link against the 64-bit version of the library -- and it is quite possible that this will succeed without even a warning (although there are cases where it would not).

Consider this function:
//
void __cdecl messup(struct my_struct *);
//
A 64-bit translation unit that calls this function after #including a 32-bit header for it will link just fine against the 64-bit library for the same function. But the 32-bit version of my_struct and the 64-bit version of my_struct may well be defined differently by the library, owing to the different data-alignment requirements of 32-bit and 64-bit for performance reasons (padded with extra bytes?). Thus the application assumes one layout while the library expects another. This might lead to a crash. Aah!

Now that's bad. So what does it finally mean? It means that the appropriate headers are just as important as the appropriate binaries, but unfortunately the build tools lack support for enforcing this. To take the problem one step further: given the various data models within 64-bit platforms, it is not just the platform that matters, but the data model.

To redefine the problem again in its final form: An application that is being built on a X data model should include the headers and libraries that were built for the X data model.

There could potentially be many ways to solve this problem. A quick answer would be to have a common header file for all data models, with #ifdef'ed code for each data model in the same file. This has a few drawbacks (in my opinion): declarations for all data models need to live in the same file (clutter? maintenance?); and it might be very difficult (possible?) to determine the data model in the pre-processor phase so that the right set of declarations goes in for compilation (afaik, there does not seem to be a pre-processor directive for the data model, and depending on pre-processor directives for each platform might be too many to handle; what about unknown platforms?).

I was actually impressed by another option, which the library I talked about had used. Among the 32-bit and 64-bit platforms, the predominant data models (LP64, LLP64, ILP32) differ only in the sizes of long and pointer. This library, while generating its own headers (at its build time), writes the sizes of long and pointer into the header file, as inferred during the library's compilation. This provides an easy and reliable way to identify, later, the data model for which the header was built.

The header file generation code would be something as simple as this:
//
fprintf(header_file, "#define MYLIB_SIZEOF_LONG %d\n", (int) sizeof(long));
fprintf(header_file, "#define MYLIB_SIZEOF_PTR %d\n", (int) sizeof(void*));
//
Now that we have a means to carry forward the metadata about the library's data model into its headers, how do we prevent compilation under an inappropriate data model? The idea used was simple, and should be self-explanatory. The library also added the following code to its header file:
//
static char _somearray_[sizeof(long) == MYLIB_SIZEOF_LONG ? 1 : -1];
static char _somearray2_[sizeof(void*) == MYLIB_SIZEOF_PTR ? 1 : -1];
//
If it isn't obvious: these lines declare an array of size -1 (which is illegal and fails compilation) in case the sizes of long and pointer in the application don't match the ones recorded in the headers. Cool! That's what we need.

There are 2 tradeoffs I see with this approach:

1. Though the misuse is prevented, the error message isn't friendly. When you use a wrong header file, you get a message saying 'invalid array size', 'invalid array subscript', 'an array should have at least one element' etc. One might have to resort to Google to figure out the issue.

2. Two more names (and 2 bytes) are added to the namespace of the current translation unit. The use of underscores and uncommon names makes a name collision very unlikely, but still :) I would think of a single struct having one member per enforcement rule, so that only 1 symbol is added to the global namespace.

Any other solution??

Thursday, May 14, 2009

32bit/64bit programming -- an interesting problem

Having bored even myself with my electronics posts, I just wanted to write something back in computer science.

Now that 64-bit computers have become common and 64-bit programming is becoming a necessity, it has become necessary to qualify the word programming with either 32-bit or 64-bit -- basically because they aren't totally the same. There were yesteryear days when we had to qualify 16-bit vs 32-bit. When I interviewed people in those times, I used to ask them the 'sizeof an integer?' and give them credit if they asked me back whether I was asking about a 16-bit compiler or a 32-bit compiler (at least if they asked me whether it was Turbo C++ or VC++ :)), and a negative mark if the answer was 2 bytes. Slowly the trend changed, 32-bit programming started dominating (i.e., people had no need/exposure towards 16-bit programming at all), everyone started answering 4 bytes always, and I stopped asking that question. Now it's time for the question again :) (btw, I don't claim that 2 bytes to 4 bytes is the only difference between 16-bit and 32-bit; this was supposed to be a basic question to start with).

64-bit programming is complicated in its own ways, primarily because of the inconsistencies in the data models. With a number of data models existing for 64-bit (thank God only 2 are predominant), it gets even more complicated. While Linux, Solaris, Mac (and more) are all lined up behind a common data model (LP64), Microsoft is, as usual, onto its own unique data model (LLP64). Although it is only Microsoft, given its dominance in the OS market, that is good enough to be a compatibility requirement. It is my personal opinion that Microsoft has a point here -- LLP64 requires fewer changes for 32-bit code to become 64-bit compatible. And I'm pretty sure this compatibility is going to help MS more than anybody else. Understanding the data models (and knowing the one being used) is important if you are programming on a 64-bit platform, and even more important if you want to write code that's compatible with both 32-bit and 64-bit platforms.

Recently I came across an interesting problem to think about, especially if you are writing a library that should be source-compatible on both 32-bit and 64-bit platforms. The problem, discussion and solution being pretty long, I will talk about it in my next post... stay tuned.

Wednesday, May 13, 2009

LCD Digital Clock

This clock is pretty similar in terms of effort to my previous seven-seg LED based digital clock -- but the outcome is just not comparable. See for yourself.

The only difference between this and my previous clock is that the display logic now drives a standard 16x2 alphanumeric LCD instead of multiplexing those 4 seven-segment LEDs (in fact I don't have to do multiplexing now, so it is even simpler, with only one timer as opposed to the earlier clock's 2 timers). I'm not going to talk about the driver code for the 16x2 alphanumeric LCD, for two reasons. First, it is too involved to be put in here and would not really fit the audience. Second, this info is available all around the web; it is just a matter of coding the protocol between the uC and the LCD controller.

Here is the LCD clock in action:

Sunday, May 10, 2009

Digital Clock

I have finally managed to build my own digital clock. This is basically 4 seven segment LEDs put together and driven by my micro-controller (an ATMega8).

I had been working on this for a few weeks. The difficult part of making this clock was multiplexing the 4 seven-segment LEDs. Soldering 4 LEDs to suit the multiplexing circuit was a nightmare. Having a printed circuit on a PCB would be the right way to go; without it, it is clumsy to build and clumsier to debug. I spent a considerable amount of time getting the soldering done -- it has to be really firm and accurate, all within a limited space. Not being an experienced guy, I found it tough. See it for yourself.





Other than this there are only two more hurdles to the problem:

1. Timing a second -- this is the crucial part of the project, although not that difficult. Will explain shortly.

2. Multiplexing 4 seven segments -- previously I had done only two; Also to make that dot (separator between hour and minute digits) to blink every second.

Timing a second:
Usually I clock the uC to run at 1MHz; this time I clocked it to run at 2MHz (though it wasn't necessary, I thought it might be useful to have precise control and more power to drive the 4 seven-segs along with running the clock).

Anyway, I used a 16-bit counter to measure a second. This counter gets incremented on every cycle; i.e., on a 2MHz clock, it would get incremented about 2 million times a second. That was a bit too fast for timing, so I configured the prescaler to bring the timer's clock down by 1/8th (the smallest division possible), which gives 256KHz (2^18, treating 2MHz as 2^21). Conveniently, it is possible to program the uC to notify you on every overflow of this 16-bit counter, instead of you checking for an overflow every time. So the overflow routine gets called every 2^16 increments of the counter. With the current clock configuration, the overflow routine gets notified 4 times a second -- good enough to time a second. So on every 4th call, the routine increments the seconds counter. The rest is obvious.

Here is the code for the overflow routine:

// g_* are global variables.
ISR(TIMER1_OVF_vect)
{
static int t = 0; // no. of times overflow has happened.

t++;
g_dot_point = (t/2); // dot point stays on for half a second and off for half.

if(t >= 4) { // every 4th overflow == one second elapsed
t = 0;

g_ss++; // increment the seconds
if(g_ss > 59) {
g_ss = 0; g_mm++;
}
if(g_mm > 59) {
g_mm = 0; g_hh++;
}
if(g_hh > 23) g_hh = 0;
}
}
Multiplexing the 4 seven-segs:
If you do not know how multiplexing displays work and if you have not read my earlier post, please consider reading it.

This is pretty similar to my earlier multiplexing code -- just an extension. Now there are 8 data pins (one extra for the dot point) and 4 control lines, one per seven-segment. The multiplexing is done in the overflow interrupt of a different timer (as the 4Hz of timer1 is too slow to multiplex 4 seven-segs). The following code should be self-explanatory.


ISR(TIMER0_OVF_vect)
{
static int n = 0; // decides which digit to update now.(right to left, 0 -> 3)
static int tp[4] = {1, 10, 100, 1000};

int cur_time = g_hh*100 + g_mm;

PORTC = 0;

seg7_write_digit_dot( (cur_time / tp[n]) % 10, // manipulate the appropriate digit
(g_dot_point && n == 2)); // 3rd digit -> print dot if req.

PORTC = 1 << n; // select the right digit by sending the correct control line.

n++; // next digit on next overflow.
if(n >= 4) n = 0;
}

One missing piece in this project is a means to set the time. The benefit it gives did not excite me enough for the amount of work required; it was kind of boring stuff. So I have configured the clock to always start at 13.25 (the time I was testing it today), so I can just choose to power on the clock at the right time, and from then on it runs fine. Anyway, I can reprogram the clock to start at whatever time I want. :)

Here is the digital clock in action:

Wednesday, May 06, 2009

Who do you trust?

Courtesy: Pravs World

In life just don’t trust people, who change their feelings with time…

Instead trust those people whose feelings remain the same, even when the time changes…

Saturday, May 02, 2009

Firefox textbox cursor issue

I recently ran across this issue with Firefox (3.0.9).

The problem was with cursor positioning as I typed into any textbox on any website in Firefox. The cursor did not always advance as I typed; sometimes it stayed in place, so the next character landed in the wrong position and the text got garbled. See what happened as I typed 'download firefox' into a Google search box.



This started happening all of a sudden and I had no idea what the problem was. I was casual about it initially, assuming it was a bug in Firefox, and restarted Firefox (with the 'Save and Quit' option). When Firefox restarted and restored all the tabs, the problem was still there. That got me a bit worried: was this a side effect of a phishing attack?

I checked with other browsers and things were fine; I was at least relieved that my computer as a whole was not compromised; if anything was, it was only Firefox. I enabled network monitoring in Firebug and watched the outgoing requests, especially from pages where I enter passwords (using wrong passwords, of course), but there was no sign of any malfunction. I also have Greasemonkey enabled, so I checked whether some script had been installed without my knowledge; but no, there were no scripts other than the ones I have for my own use.

It was starting to get beyond me; and that's when I remembered I had not "really" shut down Firefox, but only hibernated it (Save and Quit). My remaining hope was that some webpage had triggered a bug (possibly in Adobe Flash Player or the JVM?) that got reproduced every time I restored the same set of tabs, so the restart had no effect on the issue. So I did a clean shutdown of Firefox (quit without saving) and started fresh. Voila! The problem was gone. It never happened again; as of this moment I assume it was just a bug and my data was not compromised! :)

Monday, April 27, 2009

Remote surveillance on your mobile phone

I assume you have read my previous post, 'streaming webcam using VLC', which describes how to use VLC to stream your webcam's video over the network.

This opens up a new and simple means of surveillance. The idea becomes more interesting and useful depending on the network we choose and where the video is viewed from. To me, viewing the video from another computer isn't that compelling -- unless you are streaming from home and want to keep an eye on things from your office computer over the Internet; even then, there are cheaper and better ways to do the same.

I was keen on trying to perform surveillance from a mobile phone, and was pretty fascinated when I got it working. It is really awesome to watch a place in real time from somewhere else, wirelessly, on a mobile. Now that we know how to stream the video over a network, the only missing link is to establish a network between your mobile phone and your comp.

There are multiple ways to do it:

1. Bluetooth PAN (Personal Area Network): This is the simplest and cheapest, and comes at no running cost. Modern Bluetooth devices offer up to 100 m of range, but remember to check your phone's capability too. I would NOT prefer this, as it tends to disconnect and there is no easy way to reconnect remotely. But it works. I sometimes use it to keep an eye on my office cube (for no reason :) ) when I'm just around it.

2. Internet: This is cheap to establish but has a running cost (especially the data charges on the mobile side, which are usually hefty). Given that we are aiming at transferring video (at least QVGA), the bandwidth usage will cost a lot of money; the speed of the network might also be an issue (although a good EDGE service on the mobile side might be enough). However, this gives the maximum possible range of surveillance -- literally from anywhere in the world.

3. Wi-Fi: This option is similar to option 1, but much more reliable than a Bluetooth PAN. Automatic recovery from signal failures is a plus. I prefer this the most, because my office is fully equipped with Wi-Fi. In fact, our other offices (including the ones overseas) are all interconnected, so I can watch my cube (where I broadcast) from my mobile, wirelessly, from any of our offices. It's really cool (at least for the first few times). Wi-Fi does drain the battery much faster than Bluetooth (as of this writing), though -- so it may not be suitable for continuous surveillance.

4. Combination: A combination of these options can also work. E.g., I can use the Internet (broadband) on the broadcasting side, and Wi-Fi (maybe in the office?) on the mobile side.

How to view on the mobile:

I'm only going to talk about Windows Mobile here (although I believe the same software is available for Symbian phones too). All you need is a video player that can handle streaming video; based on your platform, find one. Note that the player must support the protocol and codec you used while streaming.

For Windows Mobile, you can use the free TCPMP (The Core Pocket Media Player) or its professional edition, called CorePlayer. I personally believe CorePlayer is the best for playing streaming video.

Sunday, April 26, 2009

Streaming webcam using VLC

VLC is definitely more than just a video player. It has a lot of interesting features and extensions that not everyone explores. By enabling one of its various input interfaces, it is even possible to program against your VLC player -- I wrote a clip-list application quite some time back that automatically directs VLC to play only portions of a given video (maybe a post later).

I'm not really interested in streaming my webcam as such; this was actually useful to me for a different reason. I started writing a post on that, and felt this topic deserved a post of its own -- some people might just want to stream their webcam.

It's pretty simple.
  1. Start VLC (all my instructions/snapshots are as of VLC 0.9.6).
  2. Before proceeding further, let us open VLC's console, so we can see any errors during the process. Menu: Tools -> Add Interface -> Console. VLC will log messages into this console.
  3. Menu: Media -> Stream (or Ctrl+S)
  4. Choose the 'Capture Device' tab (btw, you can stream a video/audio file/DVD using the appropriate tabs)
  5. Under the 'Video device name' drop down choose your camera (you can even stream your desktop by choosing it in 'Capture Mode').
  6. Click on Stream. A new window pops up. This is where you provide the streaming options.



  7. A simple method is to stream over HTTP -- this especially helps to get across firewalls/networks without a glitch. Provide the IP address of the interface on which you want to stream your video. E.g., if you have a multi-homed computer, you might want to bind only to your private network and not your Internet-facing IP. Choose an appropriate port; even 80 would do.
  8. Under Profile, choose Windows (wmv/asf) -- if you know what you are doing, you can pick whichever profile you see fit.



  9. Now click on Stream and your video should start streaming. If everything went fine, you should see a 'creating httpd' message in the console with no relevant error messages following it (sometimes you might not have an appropriate encoder, or the port binding might fail, etc.). The status pane in the VLC UI should also show 'Streaming'.
That's it. Now, to view the streaming video on any other machine in the network,
  1. Open VLC on the other machine.
  2. Menu: Media -> Open Network (or Ctrl+N).
  3. Select HTTP as the protocol and enter the IP address of the machine that is streaming. The port number stays disabled for me (workaround: change the protocol to RTP, change the port, and change the protocol back to HTTP :) ).
  4. Click on Play.

Saturday, April 18, 2009

I wish I were a doctor

I'm an engineer by education and profession; all along I've been happy and quite satisfied with that; but not any more.

Engineering explains most of the things that happen around you every day. I recall
  • Newton's third law while walking;
  • the Doppler frequency shift in the horn of a speeding car/bike;
  • Rayleigh scattering while looking at the orange sky;
  • the frequency spectrum while at a traffic signal, pondering why red is stop and green is go;
  • acetic acid when the bearer serves me vinegar with the fried rice;
  • potential difference when a crow casually sits on a metallic electric wire;
(and more ...)

....I've always enjoyed my education (as if I were Neo looking at The Matrix :P). No doubt I still enjoy it, but there is something else far more important to understand than all of this -- yes, the human body.

=== what follows is my own understanding of a disease; do not rely on the information here if you are looking for some critical information on this disease. ===


I was totally devastated when I got to know about this disease (not sure if I can call it a disease) called 'Guillain-Barré Syndrome', commonly referred to as GB syndrome. It is said to be very uncommon, striking just 1 or 2 in 100,000. What took me by surprise was the nature of the complication; I had never imagined such a problem was possible. In simple terms, GB syndrome is a condition wherein the body's immune system starts destroying its own nerve cells! OMG!!! Apparently there is a generic term for this kind of complication -- autoimmunity. As time proceeds, the disease gets worse, with more and more antibodies generated to act against our own self. A friend of mine is affected by this disease, which is how I know about it; it progresses at a very fast pace -- to give an example, my friend suspected some abnormality on day 1 and went to the doctor; on day 2 he felt so weak that he managed to reach the doctor only with a friend's help; on day 3 he was literally paralyzed and could not move :( The disease is so complicated and the attack so acute that the lack of immediate medical intervention can even result in death.

In spite of understanding so many things around us, there are things within us that we don't understand, and those can bring us to a halt. If I have no idea how my own body works, I have no business being proud of knowing how a computer works!! After all, nothing is more crucial than our lives. Now that it is too late to change course, I can only wish I were a doctor, to have understood at least a portion of my own body!!

One thing is clear: happiness/sadness is subjective and relative. Someone who has GB syndrome would really not worry about this economic slowdown, or losing a job, or a huge home loan on declining real estate; there are always worse things in this world; so be happy with what you have and enjoy your days!!

Thursday, April 09, 2009

Building a serial port logic convertor

As mentioned in my previous post, it is not possible to directly connect the serial port pins to the uC's pins due to the difference in logic levels. Let me talk about what that difference is and how we can build a serial port logic convertor (note: I'm just posting a summary of all the information I collected, so it is all available in one place for someone else).

Serial port (RS232) logic levels:
In a serial port, logic 1 is denoted by any voltage between -3 V and -25 V, and logic 0 by any voltage between +3 V and +25 V. For a uC (TTL), logic 0 is 0 V to +0.8 V and logic 1 is +2.2 V to +5 V.

Now, there are two problems to be solved:
  1. Serial ports have a wide operating range -- the worst case being a 50 V swing.
  2. The logic levels are totally different and incompatible with uCs (TTL).
I came across a naive serial port logic convertor that just uses a voltage regulator (LM7805) to bring the voltage down to the required levels -- but the fundamental assumption there is that the serial port operates above 5 V, which isn't necessarily true according to the standards. That said, most serial ports do seem to follow an unwritten standard of -12 V for logic 1 and +12 V for logic 0. Still, a circuit built on that assumption is probably going to bite us sooner or later.

A common, elegant solution is to use a MAX232 IC, which does the job for us. I got a MAX232 in a 16-pin PDIP (8 + 8). The IC can convert two serial I/O channels to TTL levels. The connections are fairly simple; the following schematic is taken from the datasheet.



PIN configurations for a standard serial port:
  • PIN2 -- output pin of serial port (should go into the input pin of MAX232 -- output should be read),
  • PIN3 -- input pin of serial port (should go into the output pin of MAX232 -- input should be sent),
  • PIN5 -- ground
I managed to build my own serial port convertor on a general purpose PCB. This is how it looks after it was soldered.



Under the board (the nasty soldering):


Before I integrate this convertor with my uC and fiddle around, I should make sure it works; otherwise it would be very difficult to isolate the problem if I made a software error later. The approach is pretty simple: short the output and input pins on the uC side of the convertor, creating a loopback. This circuit will just send back whatever comes in -- in software terms, an echo server.

The circuit can be tested by connecting it to the computer's serial port and then using HyperTerminal on Windows to connect to the COM port it is on. It is important to set flow control to 'None'. If everything goes well, just start typing in HyperTerminal and you should see what you are typing echoed back. That proves the serial port logic convertor works fine (by looping back).

Here is the working circuit in action:

Sunday, April 05, 2009

Computer and micro-controller communication

After my digital thermometer, I thought it worthwhile to interface my micro controller (uC) projects with my computer. I see tonnes of advantages in doing this. While working with uCs, especially as a computer programmer, I found it difficult to do a number of things. Basics like input/output aren't readily available (note that I'm not blaming the uCs here; after all, we are not writing software here, but building hardware) -- this makes it very difficult to debug or build prototypes. These days I use LEDs (and different blink rates) to debug various scenarios. But imagine how I would have debugged my first LED blink project :) -- there was simply no way. It's not just debugging: letting the uC talk to the computer (when required) opens up a whole new world of communication. So many facilities become readily available, like
  • keyboard - I can configure my uC parameters at runtime from my keyboard?
  • monitor - I can send debug output or runtime logs to my comp and record them? You know how much work I did just to show 2 digits in hardware.
  • processing power - not sure how useful it is but if I need I can make use of the huge processing power a comp has.
  • internet - this is very interesting. Maybe my uC doesn't need the Internet, but how about controlling my uC over the Internet?? Maybe controlling my room's lights from the office? office -> internet -> webserver on my home comp -> uC -> light; I think this opens up a lot of new opportunities.
Let's see how it evolves :) I have no idea yet how I'm going to use it, but I'm convinced that, given my programming knowledge on computers, it is quite a useful thing to have. Technologies become more and more powerful when they interact.

Now that we are convinced a computer interface is useful, it is time to think about the choices. The possible options are the serial port, parallel port, USB, Bluetooth, and infra-red. Bluetooth and infra-red need hardware counterparts on both sides (on the computer and on the uC hardware), so they aren't practical for me now. USB seems an option, but it requires a USB protocol handler on the uC side (there are free, open-source USB stacks available for AVR uCs, but they occupy considerable code space -- and with just 8KB available for programming, I would rather not). Serial and parallel ports are the simplest options. Of the two, I prefer the serial port for two reasons.
  1. The serial port requires fewer lines for transmission/reception (technically only 2 data lines (tx/rx); additional lines for vcc, gnd, etc. add up to 4-5 lines, still much better than a parallel port).
  2. The ATMega8 has built-in support for USART - the standard serial transmission protocol. This makes it easy to talk over the serial port between the computer and the uC.
So I chose the serial port as the preferred communication interface. This does not mean I can just run a few wires from the computer's serial port to my uC -- the logic levels differ between the serial port and the uC, so I need a logic-level convertor before I can communicate. How? Stay tuned.

Monday, March 30, 2009

Digital Thermometer

This is where I was heading to. With the last module, I was ready with 2 digit 7segment LED which could be used to show the current ambient temperature.

The only remaining part is to integrate the temperature sensor into the system: read, decode, and display the reading. I used the LM35 temperature sensor, which is quite simple and handy to use (about the size of a transistor). The LM35 is a centigrade temperature sensor with 3 terminals -- VCC, Vout and GND. Put 5 V across VCC and GND, and you can calculate the current ambient temperature from the potential at Vout. Per the LM35 datasheet, Vout is (0 mV + 10 mV/degree). So 100 mV at Vout means a sensed temperature of 10 degrees centigrade.

Now the remaining task is to make the micro controller (I use an ATMega8) read this value. A uC deals only with digital data; this being an analog signal, it has to be fed through an Analog-to-Digital Convertor (ADC). Incidentally, the ATMega8 has an inbuilt ADC (with 6 channels in the PDIP package). For the ADC to decode the analog input properly, the ARef terminal (pin 21 in PDIP) has to be set to a reference voltage. For example, if the reference voltage is 5 V, one unit in a 10-bit ADC is (5/1024) volt, i.e., ~5 mV. So for every 5 mV from the analog input (in our case, the LM35), the ADC reading goes up by 1 unit.

In my case, ARef is set to 4.85 V. Hence one ADC unit is (4.85/1024) volt, i.e., 4.736 mV. As discussed, the LM35 outputs 10 mV per degree centigrade, so the temperature reading is (adc_reading * 4.736 / 10), or (adc_reading * 0.4736) degrees centigrade.

Hardware:
The hardware part is just connecting the LM35 to my previous module. The output of the LM35 goes to ADC channel 2 (pin 25 in PDIP) -- channels 0 and 1 are shared with PORTC bits 0-1, which I have been using as control bits for selecting the 7-segment digit in TDM mode.

Software:
After enabling ADC channel 2, the ADC value is read and the temperature is calculated using the formula derived above. The value is stored in a global volatile variable, which is displayed on the 7-segs as in my previous module. The temperature is read every 2 seconds (an arbitrary choice).

Here is the code:

// Author : Gerald Naveen A (ageraldnaveen at gmail dot com)

#include <avr/io.h>
#include <avr/interrupt.h>

#define F_CPU 1000000

#include <util/delay.h>

static volatile uint16_t g_temp_c = 99;

// insert TDM based seven segment code here... interrupt handling etc.,
// didn't want to bloat the codespace while publishing.

void initialize_adc()
{
    ADMUX = (1 << REFS0); // reference = AVcc (~4.85V here)
    ADCSRA = (1 << ADEN) | 7; // enable ADC && prescaler /128
}

uint16_t read_adc_channel(unsigned int ch)
{
    uint16_t result;

    ADMUX = (ADMUX & 0xF8) | (ch & 0x07); // select ADC channel 'ch'

    ADCSRA |= (1 << ADSC); // start conversion

    while(!(ADCSRA & (1 << ADIF))); // wait for conversion to complete

    result = ADC; // read the result

    ADCSRA |= (1 << ADIF); // clear the conversion-complete flag (write 1 to clear)

    return result;
}

int main()
{
    sei();

    initialize_adc();
    // initialize timer etc., as in my previous module

    while(1) {
        // reading from channel 2
        uint16_t val = (uint16_t)(read_adc_channel(2) * 0.4736);
        if(val < 100) // guard against noise with an upper limit (100 too big?)
            g_temp_c = val; // send it for display

        _delay_ms(2000);
    }
    return 0;
}
Here is a snapshot of my setup showing the temperature inside my refrigerator :) I could not shoot a meaningful video, as the project shows an almost constant number. The display was actually showing 8 degrees when I first opened the fridge after keeping the project inside for around 5 minutes; by the time I propped the breadboard upright so the digits were visible and clicked the photo, the reading had already climbed a few degrees because the door was open :D