CaptureIR: how to use data from digitrace


johnsfine
Site Admin
Posts: 4766
Joined: Sun Aug 10, 2003 5:00 pm
Location: Bedford, MA
Contact:

CaptureIr source code?

Post by johnsfine »

I'm using prototype 3 from Aug 9, but its included source code is from Aug 6 and earlier.

I tried reading through the demodulate method to understand how it misinterpreted those 300 uS gaps. But the demodulate code makes no sense. I can't imagine how it would ever do any sort of demodulation. Certainly not the demodulation in prototypes 2 or 3.

Does the enclosed source code match some version that worked at all?

Can I get a look at the current source code?
mtakahar
Expert
Posts: 281
Joined: Sun Aug 03, 2003 2:46 pm

Post by mtakahar »

I'll post the latest one with the latest source code once I get home.

Hal

Re: Yet another CaptureIr problem

Post by mtakahar »

johnsfine wrote:Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at CaptureIR$IRSignalTableModel.getColumnClass(CaptureIR.java:452)
at TableSorter.getColumnClass(TableSorter.java:266)
at javax.swing.JTable.getColumnClass(Unknown Source)
at javax.swing.JTable.getCellRenderer(Unknown Source)
at javax.swing.plaf.basic.BasicTableUI.paintCell(Unknown Source)
at javax.swing.plaf.basic.BasicTableUI.paintCells(Unknown Source)
at javax.swing.plaf.basic.BasicTableUI.paint(Unknown Source)
at javax.swing.plaf.ComponentUI.update(Unknown Source)
at javax.swing.JComponent.paintComponent(Unknown Source)
at javax.swing.JComponent.paint(Unknown Source)
at javax.swing.JComponent.paintWithOffscreenBuffer(Unknown Source)
at javax.swing.JComponent.paintDoubleBuffered(Unknown Source)
at javax.swing.JComponent._paintImmediately(Unknown Source)
at javax.swing.JComponent.paintImmediately(Unknown Source)
at javax.swing.RepaintManager.paintDirtyRegions(Unknown Source)
at javax.swing.SystemEventQueueUtilities$ComponentWorkRequest.run(Unknown Source)
at java.awt.event.InvocationEvent.dispatch(Unknown Source)
at java.awt.EventQueue.dispatchEvent(Unknown Source)
at java.awt.EventDispatchThread.pumpOneEventForHierarchy(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.run(Unknown Source)

What is all that telling me?
This means the callback that returns a table cell value is crashing. Perhaps a field in an IRSignal class instance is not set correctly, or there's a multi-threading issue (a race condition).
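A classic way to hit that exact NPE is a getColumnClass override that derives the column class from a cell value another thread hasn't filled in yet. A defensive sketch (this is not CaptureIR's actual model code; the class name and backing data here are made up):

```java
import javax.swing.table.AbstractTableModel;

public class SafeColumnModel extends AbstractTableModel {
    private final Object[][] rows;  // hypothetical backing data

    public SafeColumnModel(Object[][] rows) { this.rows = rows; }

    public int getRowCount()    { return rows.length; }
    public int getColumnCount() { return rows.length == 0 ? 0 : rows[0].length; }
    public Object getValueAt(int r, int c) { return rows[r][c]; }

    @Override
    public Class<?> getColumnClass(int col) {
        // Tolerate a cell another thread has not set yet: fall back to
        // Object.class instead of throwing an NPE in the middle of a repaint.
        Object v = getRowCount() > 0 ? getValueAt(0, col) : null;
        return v == null ? Object.class : v.getClass();
    }
}
```

Returning Object.class for a not-yet-populated column just makes JTable use its default renderer instead of killing the paint cycle.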


I've just taken a look at the source code vs. the compiled one. I think the source code is for prototype 3. The source files have older timestamps only because my Makefile rebuilt everything just before creating the .jar file.

So, whether it makes sense to you or not, the demodulation code you are seeing is what is actually used in prototype 3.

I'll post the current one later anyway.


It should be pretty easy to make the JNI code work with MSVC, but I believe you mentioned that yours is too old to compile the porttalk code I'm using. You could still try recompiling with USE_PORTTALK undefined (plus a few changes if necessary) and use allowio externally.


Hal

Post by mtakahar »

johnsfine wrote:I also read the IR sensor documentation from Fairchild. It seems to say the model we use has open collector output. How is that reliable without a pull-up resistor? It also seems to say supply voltage is 4 volts minimum. How much of my problem is from the 3 volt supply?
You might want to ask Tommy or open another thread so some other hardware experts can notice it.

It shouldn't be much of a problem in my case because the parallel port in my laptop gives 4.8V. Even so, I often see 200 - 400 micro sec continuous ON times recorded (without demodulation.) It could be because my laptop is a single processor w/o HT support, some bugs in the timing measuring code, or maybe something else.

BTW, I tried some 455KHz signals (my Sony car stereo remote and one Jon created) with Digitrace right after building the probe, then with CaptureIR later. It simply can't see pulses that short. The pulses looked pretty much the same as what I got with a UEI learning remote. On this one, I suspect the IR sensor and/or parallel port response time is too slow.

johnsfine wrote:But there is so much (mingw) stuff to install and so much risk of messing up my current build environment, I've been scared to.
Is it that much? I thought I just needed to download two or three archives, including GNU make, for standalone MinGW. The Cygwin version needs a lot more, though.

Hal

Post by mtakahar »

mtakahar wrote:I'll post the latest one with the latest source code once I get home.
Here it is.

Post by johnsfine »

mtakahar wrote:
mtakahar wrote:I'll post the latest one with the latest source code once I get home.
Here it is.
Thanks. At first glance that source code seems to explain the problem I described earlier. Note the section of demodulate that says:

if (onLen + offLen <= maxCarrierLen)
{
carrierLenSum += (onLen + offLen);
++carrierCycles;
}
else if (offLen >= minGapLen)

You do one thing if you're looking at a valid carrier cycle, another if you're looking at a valid gap, but in the third case you do nothing.

Is the third case supposed to be a latency or other capture glitch? Meaning the code has deduced that capture missed something, so you include the time within the modulated pulse, but don't include it in the frequency calculation?

Maybe I just need to reduce minGapLen to something that is a better fit for actual protocols.
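My reading of the three cases, as a sketch (the branch conditions mirror the quoted snippet; the accumulator handling is my guess at the intent, and every name other than onLen, offLen, maxCarrierLen, and minGapLen is made up):

```java
// Sketch of the three-way branch in demodulate(); units are timer ticks.
public class DemodSketch {
    public long burstLen = 0;       // time inside the current modulated ON burst
    public long carrierLenSum = 0;  // time counted toward the frequency estimate
    public long carrierCycles = 0;  // cycles counted toward the frequency estimate

    public void step(long onLen, long offLen, long maxCarrierLen, long minGapLen) {
        if (onLen + offLen <= maxCarrierLen) {
            // Case 1: a valid carrier cycle -- counts toward both the
            // burst length and the frequency estimate.
            burstLen += onLen + offLen;
            carrierLenSum += onLen + offLen;
            ++carrierCycles;
        } else if (offLen >= minGapLen) {
            // Case 2: a valid gap -- the current burst ends here.
            burstLen += onLen;
            emitBurst(burstLen, offLen);
            burstLen = 0;
        } else {
            // Case 3: too long for a carrier cycle, too short for a gap.
            // Treated as a capture glitch: the time stays inside the burst
            // but is excluded from the frequency calculation.
            burstLen += onLen + offLen;
        }
    }

    void emitBurst(long on, long off) {
        // A real implementation would record the (on, off) pair here.
    }

    public double estimatedCarrierHz(double ticksPerSecond) {
        return carrierCycles == 0 ? 0.0
             : carrierCycles * ticksPerSecond / carrierLenSum;
    }

    public static void main(String[] args) {
        DemodSketch d = new DemodSketch();
        d.step(13, 13, 30, 100);  // two clean 26-tick carrier cycles...
        d.step(13, 13, 30, 100);
        d.step(10, 40, 30, 100);  // ...then a glitch: absorbed, not counted
        System.out.println(Math.round(d.estimatedCarrierHz(1e6)) + " Hz");
    }
}
```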
Tommy Tyler
Expert
Posts: 411
Joined: Sun Sep 21, 2003 11:48 am
Location: Denver mountains

Post by Tommy Tyler »

Hal and John,

Maybe I can be of some help.
johnsfine wrote:I also read the IR sensor documentation from Fairchild. It seems to say the model we use has open collector output. How is that reliable without a pull up resistor?
Most parallel port pins have about 4.7K internal pull-ups, which should be adequate.
johnsfine wrote:It also seems to say supply voltage is 4 volts minimum. How much of my problem is from the 3 volt supply?
My spec sheet actually says 4.5 volts minimum. During development, all my tests indicated that the Fairchild spec is ultra-conservative. I just now repeated the test and got perfect response at 2.5 volts using a 40kHz remote in a normally lit room from a distance of two feet. I think part of the concern is that the internal voltage regulator in the detector may fall out of regulation if the input voltage is too low, but that shouldn't be a problem in this application.

But your concern is valid. I suggest some sort of test to tell you whether the probe is capturing fast pulses reliably. One of the quickest and easiest tests that comes to mind is to look at the 40kHz lead-in burst of a Sony TV0000 protocol. Are all the pulses in that burst clean and more-or-less even? Missing pulses ought to jump right out at you. That's also the time to determine what distance and orientation of the probe provide the best results.
mtakahar wrote:I often see 200 - 400 micro sec continuous ON times recorded (without demodulation.) Could be because my laptop is a single processor w/o HT support, some bugs in the timing measuring code, or maybe something else. I tried some 455KHz signals (my Sony car stereo remote and one Jon created) with Digitrace right after building the probe then with CaptureIR later. It simply can't see pulses that short. The pulses looked pretty much the same as what I got with a UEI learning remote. On this one, I suspect the IR sensor and/or parallel port response time being too slow.
Probably not the sensor. It should be able to detect a MHz or so, since its rise and fall times are 0.1us or less. But I am curious how you connected the 455kHz signal to the parallel port, because those signals aren't usually TTL voltage levels.

In the original DigiTrace the fastest sampling rate was displayed in the Settings window as "Granularity", and in my setups it was running 1.26us to 1.54us. Since a 455kHz signal is about 1us ON and 1us OFF, there just aren't enough samples. Do you guys have any idea what kind of sampling rate you're getting? Obviously there's no way to record 200-400us of continuous ON time if you're sampling at 1-2us per sample.
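The arithmetic here is easy to check: at a 1.42us granularity a 40kHz carrier gets nearly nine samples per half-cycle, while a 455kHz carrier gets less than one, so its pulses simply merge. A quick sketch (the helper name is mine; the numbers are the ones from this thread):

```java
public class SamplingCheck {
    /** Samples landing in each half-cycle of a square-ish carrier. */
    public static double samplesPerHalfCycle(double carrierHz, double granularityUs) {
        double halfCycleUs = 1e6 / carrierHz / 2.0;  // ~12.5us at 40kHz, ~1.1us at 455kHz
        return halfCycleUs / granularityUs;
    }

    public static void main(String[] args) {
        System.out.printf("40 kHz:  %.1f samples per half-cycle%n",
                          samplesPerHalfCycle(40_000, 1.42));
        System.out.printf("455 kHz: %.2f samples per half-cycle%n",
                          samplesPerHalfCycle(455_000, 1.42));
    }
}
```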

Incidentally, to evaluate the timing accuracy of DigiTrace I used a 100kHz quartz crystal controlled square wave.

I'll confess that I'm not exactly sure what you guys are doing, but it sounds like you're replacing DigiTrace with your own software. As I read comments about other concurrent tasks and Hyperthreading I'm wondering if the same approach is being taken. The author of the original DigiTrace described his program as using "non-interrupted burst acquisition". The way everything is locked up while waiting for samples, I interpreted that to mean that nearly all Windows interrupts were disabled, which was the secret of collecting data so fast without dropouts. I probably just don't understand everything I know about this.

Anyway, let me know if I can help.

Tommy

Post by mtakahar »

johnsfine wrote:Is the third case supposed to be a latency or other capture glitch? Meaning the code has deduced that capture missed something, so you include the time within the modulated pulse, but don't include it in the frequency calculation?
Yes, you are right. I'll put some comments there.
In the beginning, I started with just ignoring short off-periods, but I soon switched to focusing on gaps, because the first approach didn't work in practice due to those anomalies.

I'm very interested in hearing how it goes on your 2 cores x 2 HT system. Does it also have 3.3V parallel port?

Hal

Post by mtakahar »

Wow, welcome back, Tommy.
Tommy Tyler wrote:In the original DigiTrace the fastest sampling rate was displayed in the Settings window as "Granularity", and in my setups it was running 1.26us to 1.54us. Since a 455kHz signal is about 1us ON and 1us OFF, there just aren't enough samples. Do you guys have any idea what kind of sampling rate you're getting? Obviously there's no way to record 200-400us of continuous ON time if you're sampling at 1-2us per sample.
On my PC, digitrace says the "granularity" is 1.42 us. It's understandable that it can't capture all the ons and offs; what we see is a composite wave consisting of the IR signal, the interval timer frequency, plus possibly something else.

In the case of my program, Windows' high resolution interval timer (aka the performance counter) runs at 3.5MHz, which is 0.28 us per tick (I hope this calculation is correct.) I'm not touching any system interrupts, which could explain the pulse length anomaly I'm seeing with regular IR signals, though I don't expect system-wide interrupts to happen frequently enough to influence it consistently.

I'll read the material on the digitrace site carefully and try to find out whether they document how they determine the sampling resolution. System interrupts should be a lot more problematic on an SMP and/or HT system, but at the same time, I want to avoid the "freezing" digitrace causes.

Thanks for the pointers.


Hal

Post by Tommy Tyler »

mtakahar wrote:I'll read stuff on the digitrace site carefully and try to find out if they have information about how they determine the sampling resolution.
I don't think you'll find much info there. For what it's worth, here's my own theory:

When you launch DigiTrace the screen initially shows the JWA logo with the "Length" always shown as 66.67us. Then after four or five seconds the logo disappears and the Length changes to something like 142.30us. Without even dropping down the Sample Settings window to look, you know that the Granularity is 1.42us, because that opening display is always 100 samples wide.

Without a doubt, a calibration process is going on during the time the logo is on screen. I think the program reads CPU time, goes through the actual steps of collecting a large number of samples (possibly as many as several hundred thousand, judging from the time involved), reads CPU time again, and divides the elapsed time by the number of samples to get "Granularity" (sample time). There is evidence that the sample time used internally is carried to more decimal places than just two, and that it is merely rounded to the nearest hundredth of a us in the displayed Granularity. I won't bore you with the mathematics of this. But you may note that each time you click zoom minus to double the Length, you reach points where it is no longer exactly a power-of-two multiple of the granularity, even though the number of samples is doubled.

At various times I have seen rogue granularities pop up, and I always felt intuitively it was because something going on in the background interrupted the calibration process. Here's one way you can demonstrate that. If you are set up to launch DigiTrace by double-clicking an icon on the desktop, position that icon so it's not covered by the opening DigiTrace screen. Double-click the icon to launch DigiTrace, then keep double-clicking the icon once every second for four or five additional launches. Each double-click launches another session, and the screens lie directly on top of each other. Now close them one at a time and look at the various starting Lengths on each. Notice how they vary, and how much longer the Lengths are than when you launch just one session. I think that indicates Windows' launching of subsequent sessions is taking processor time away from the calibration sequences.

If you try to mess up the calibration process by holding down a mouse button or keyboard key, it has no effect. That suggests to me those interrupts have been disabled. So if you expect time measurements to be meaningful, you should try to make the calibration process duplicate the actual sampling process that will take place when collecting data.
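The calibration I'm describing would amount to something like this, shown in Java with System.nanoTime standing in for the CPU-time read (DigiTrace itself certainly isn't Java, and all names here are mine):

```java
public class GranularityEstimate {
    /** Time a tight loop of dummy "port reads" and divide by the count,
        the way DigiTrace is suspected of deriving its Granularity figure. */
    public static double estimateGranularityUs(int samples) {
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < samples; i++) {
            sink += readPort();            // stand-in for the real port input
        }
        long elapsedNs = System.nanoTime() - start;
        if (sink == -1) System.out.println(sink);  // defeat dead-code elimination
        return elapsedNs / 1000.0 / samples;       // ns -> us, per sample
    }

    private static int readPort() {
        // Hypothetical: a real probe would read the parallel port status
        // register here (an inb on the native side).
        return 0;
    }

    public static void main(String[] args) {
        System.out.printf("estimated granularity: %.4f us%n",
                          estimateGranularityUs(1_000_000));
    }
}
```

The point Tommy makes still applies to a sketch like this: the calibration loop must do the same work as the real sampling loop, or the estimate is meaningless.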
mtakahar wrote:System interrupts should be a lot problematic on a SMP and/or HT system, but at the same time, I want to avoid the "freezing" digitrace causes.
DigiTrace only "freezes" if there is no data, which seems to me like the compromise you have to make for "uninterrupted" burst mode.

Tommy

Post by mtakahar »

Tommy Tyler wrote:
mtakahar wrote:I'll read stuff on the digitrace site carefully and try to find out if they have information about how they determine the sampling resolution.
I don't think you'll find much info there.
It wasn't so bad. The brief descriptions on their site are good enough for me to guess from.
Run on Win95 and Win98 and ME using non-interrupted burst acquisition.
Run on Win2000 NT XP with interrupted acquisition, using the allowio driver.
I think they are just doing cli/sti in their driver, and that won't work on the NT variants, where device drivers run under stronger resource protection.
08062005: If you have problems getting it to run with WinXP, as many seem to have, please try entering the following key to your registry: [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Parport\Parameters] "DisableWarmPoll"=dword:00000001

You can revert to the original setting by entering: [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Parport\Parameters] "DisableWarmPoll"=dword:00000000
OS/driver polling was another thing I suspected as a cause of the irregular pulses. It doesn't seem to make a noticeable difference for me, but I'm fairly sure it would for someone else.
Tommy Tyler wrote:I think the program reads CPU time, goes through the actual steps of collecting a large number of samples (possible as many as several hundred thousand, judging from the time involved), reads CPU time again, and divides the elapsed time by the number of samples to get "granularity" (sample time).
I think you are right that there's some sort of calibration process going on, though 1.xx usec still sounds coarser than what could be achieved just by measuring the speed of a tight loop with some I/O port monitoring and other bookkeeping. (My PC has a 1.3 GHz Celeron; the OS is XP SP2.)

Edit: I found a problem in my loop speed measuring code. After the fix, the typical average speed is only 0.25 loop iterations per uSec. This is ok up to around 100KHz but not much more; 455KHz is out of the question. The most time-consuming part is the call to the Windows performance counter.

The fluctuations I saw in this test indicate that the capturing thread's priority may not be reasonably high on my machine, and that would explain the irregular pulses I'm seeing.
Edit: I verified this as well. I'll post more details later.
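On the Java side, one cheap experiment for the priority theory is simply raising the capture thread's priority (a sketch only; Java priorities map loosely to Windows priorities, and a real fix may also need SetThreadPriority in the JNI code):

```java
public class CaptureThreadPriority {
    /** Start the sampling loop on a max-priority thread to reduce
        scheduling jitter. The capture loop itself is left abstract. */
    public static Thread startCaptureThread(Runnable captureLoop) {
        Thread t = new Thread(captureLoop, "ir-capture");
        t.setPriority(Thread.MAX_PRIORITY);
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = startCaptureThread(() -> { /* sampling loop would run here */ });
        t.join();
        System.out.println("capture thread priority was " + t.getPriority());
    }
}
```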
Tommy Tyler wrote:DigiTrace only "freezes" if there is no data, which seems to me like the compromise you have to make for "uninterrupted" burst mode.
Tommy Tyler wrote:I'll confess that I'm not exactly sure what you guys are doing, but it sounds like you're replacing DigiTrace with your own software.
"Replacing Digitrace" is not exactly what I am trying to do. From JP1 users' point of view, it's more like "replacing learning remotes." With your IR probe, my program CaptureIR, and John's DecodeIR.DLL, you no longer need a UEI learning remote to create new device upgrades from the original remotes for your new equipment. Furthermore, it'll be less cumbersome: you don't have to go through several rounds of "learn as many buttons as possible", "download", "copy & paste" cycles to deal with many buttons. Just point and shoot almost as many times as you need.

In my program, I don't want it to get stuck or lock up the entire system just because the user stopped pressing a button. The "uninterrupted" mode doesn't sound feasible on XP anyway.


Hal
Last edited by mtakahar on Sun Aug 14, 2005 9:30 am, edited 1 time in total.

Post by johnsfine »

mtakahar wrote: I'm very interested in hearing how it goes on your 2 cores x 2 HT system. Does it also have 3.3V parallel port?
2 cores (per cpu chip) is something else. I don't have that. I have two complete CPUs on one motherboard and each supports HT. I never measured the printer port voltage.

I was already describing the results of that system.

I figured out how to compile a .java file and used Winzip to substitute the resulting .class files into the .jar file. I know that isn't the right way and maybe at some point I'll experiment further to get the right way to work (without changing any environment or registry settings that would disturb the obsolete Microsoft Java, obsolete MSVC, and obsolete cygwin all of which have to remain installed and working).

Anyway, I changed minGapLen to 100 rather than the computed value it had.

Without that change, Prototype 4 was even worse than Prototype 3. It seems to throw whole captures away when there are a lot of gaps around 400 uS, so I get to see nothing and would have had no guess at what was wrong (had I not seen the issue in Prototype 3).

With that tiny change I'm capturing those signals and decoding them without those problems.

Maybe it takes a multi-cpu system to be this free of latency problems, and that 100 uS setting will cause spurious gaps on a less powerful system. I like your idea of adding some setup choices, at least until we understand the behavior across a range of systems.

I have also noticed that the final pulse is lost. That doesn't matter much for a repeating signal but trashes one-time signals. Obviously you don't know the length of the final gap because you cut off the signal before it ends, but you can just plug in a big number as if you knew and include that last burst in decoding. Later I'll see if I can find that in the java code and test my own correction. But for now I need to go elsewhere.

Post by mtakahar »

johnsfine wrote:
mtakahar wrote: I'm very interested in hearing how it goes on your 2 cores x 2 HT system. Does it also have 3.3V parallel port?
2 cores (per cpu chip) is something else. I don't have that. I have two complete CPUs on one motherboard and each supports HT. I never measured the printer port voltage.
I didn't necessarily mean "dual-core" or "multi-core" CPUs, just 2 real CPU cores total in the system. I understand why you read it that way, though, since Intel is talking more and more about these new dual/multi-core chips these days.
I figured out how to compile a .java file and used Winzip to substitute the resulting .class files into the .jar file. I know that isn't the right way and maybe at some point I'll experiment further to get the right way to work (without changing any environment or registry settings that would disturb the obsolete Microsoft Java, obsolete MSVC, and obsolete cygwin all of which have to remain installed and working).
Since you have a Cygwin environment, you may already have GNU make. If you do, the included Makefile should work for the most part, and you won't need gcc unless you want to rebuild the native interface code written in C.
Try running the GNU make after commenting out the following lines:

Code:

#CaptureIR.dll: ${JNIOBJS} JNIUtil.o
#	gcc -O3 -Wall -D_JNI_IMPLEMENTATION_ -Wl,--kill-at \
#	-I${JDKPATH}/include -I${JDKPATH}/include/win32 -I. \
#	-shared -o $@ $^
I can put it behind conditional make rules (ifeq (...)) so you only need to edit one line (or specify it on the make command line) to indicate you don't want to rebuild CaptureIR.dll. I'll do that in the next distribution if you think it's useful.
Anyway, I changed minGapLen to 100 rather than the computed value it had.
Without that change Prototype 4 was even worse than prototype 3. It seems to throw whole captures away when there are a lot of gaps around 400 uS, so I get to see nothing and wouldn't have a guess what was wrong (without having seen the issue in Prototype 3).

With that tiny change I'm capturing those signals and decoding them without those problems.
Which type(s) of signals are you testing with? Perhaps I'll include them in my test file and see how it goes on my machine.

Also, do the frequencies look better than before on your machine(s)?
Maybe it takes a multi-cpu system to be this free of latency problems, and that 100 uS setting will cause spurious gaps on a less powerful system.
Sounds like that's what's happening...
But it's really good that we can run this on multiple machines with different specs and compare the results.
I like your idea of adding some setup choices, at least until we understand the behavior across a range of systems.
I was assuming that tweaking demodulation parameters would belong with the advanced options, but they may have to go into the basic ones if this is something many people have to change. Anyway, we can discuss this more later.
I have also noticed that the final pulse is lost. That doesn't matter much for a repeating signal but trashes one-time signals. Obviously you don't know the length of the final gap because you cut off the signal before it ends, but you can just plug in a big number as if you knew and include that last burst in decoding.
You mean the last whole on/off pair in the sequence, or just the last off? The cut-off happens either when it reaches the buffer limit or when the off period exceeds 0.5 sec (or something like that). If it's missing a whole pair without reaching the limit, it may just be a bug.

If it's because of 0.5 sec timeout, then perhaps I should change the default value (this will be another setup option I'm planning to add BTW.)

If it's none of the above (the final off period simply cannot be known because there's no following on), then plugging in a pseudo off value makes sense. It would be cleaner if DecodeIR could fill in the blank in the future, though.
Later I'll see if I can find that in the java code and test my own correction. But for now I need to go elsewhere.
I was going to suggest doing something like this to compare before and after demodulation (un-demodulated bursts go to stdout), but it looks like there's a bug in handling the frequency array while converting the time index data to non-demodulated burst data.

Code:

if (demodulate)
  {
    rawSignal.computeBursts();     // <<< debug
    System.out.println(rawSignal); // <<< debug
    rawSignal.demodulate(carrier_detection_allowance,
                         min_cycles_per_burst);
  }
else
  rawSignal.computeBursts();
The thread crashes. I'll look into this tonight or tomorrow.

When you end up with a bad capture, it's supposed to skip demodulation and show you the captured raw bursts. I think that's what was happening when you thought it was throwing away the data. I was thinking of dumping the errors into a file, as Greg does, by the time of an official release, but perhaps I should do it sooner than that. In the meantime, you can put a "pause" in the .bat file to see any errors printed to stderr/stdout.

Hal

Post by mtakahar »

BTW, Tommy, do you think it's easy to modify IRDC to support ~450KHz signals?
Please don't bother seriously trying to design it, at least yet. Just a thought.

Hal

Post by johnsfine »

mtakahar wrote: Which type(s) of signals are you testing with? Perhaps I'll include them in my test file and see how it goes on my machine.
I made an upgrade in RM using pid_0003. That was just one of several tests I did, but it was the one that failed most drastically before I changed minGapLen. Note that pid_0003 is repeating or non-repeating based on the low bit of the hex cmd, so it also has the last-burst issues.
mtakahar wrote: Also, are the frequencies look better than before on your machine(s)?


That improved when you said it would (I forget which prototype number). It still isn't as good as a JP1 learning remote at getting the frequency right, but it's much better than at first. We can probably improve it further later, but it's not as serious as the other issues.
Maybe it takes a multi-cpu system to be this free of latency problems, and that 100 uS setting will cause spurious gaps on a less powerful system.
Sounds like that's what's happening...

That was just speculation on my part. I haven't tried the 100 uS setting on a single-CPU system, so I don't know whether it would have latency problems.
You mean, the last whole on/off pair in the whole sequence, or just the last off?
The last whole burst pair was getting lost.

I haven't figured out how much, if any, is lost before demodulation. I kludged a change into demodulation to recover most of the On part of the last burst pair.

In raw cycles you ought to have the last On followed by an unknown Off (you know the unknown Off is at least .5 seconds if it was cutoff by time, but you don't know more than that).

Each raw half cycle is represented by the timestamp of the beginning of the next half cycle. Since the half cycle after cutoff never begins, I assume nothing is stored.

Demodulation then loses much more: many raw cycles go into the On part of the last burst. Those are all accumulated by the demodulation, but the loop then ends without a minGapLen-sized gap, so the accumulated last demodulated On is lost. I kludged a change into that code to get it out.

I don't understand the minus one in the loop control. I think even with my kludge we are still short two full raw cycles: the big half of a raw cycle at the end, missing for the reason I described above, plus three more raw half cycles, maybe just because of that loop control (I can't follow the relationships among the modules well enough to be sure).
(just the final off period cannot be known because there's no following on), then plugging in a pseudo off value makes sense. It'll be cleaner if DecodeIR can fill in the blank in the future, though.
DecodeIr.dll does extend the last off time whenever there is no repeat part (this raw capture has no repeat part), and it takes only full bursts. So providing the On half of the last burst with anything as its Off half is enough to make DecodeIr happy. It just may confuse the user and/or other programs if the timing display shows a short off at the end where it was really something over .5 seconds.
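The workaround being discussed amounts to appending a stand-in Off to close the final burst pair before handing the timings to DecodeIr. A sketch (the constant and the flat on/off list layout are mine, not CaptureIR's):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FinalBurstFix {
    // Hypothetical stand-in value; anything comfortably longer than a real
    // inter-burst gap works, since DecodeIr extends the final off time
    // itself when there is no repeat part.
    public static final int DUMMY_FINAL_OFF_US = 500_000;

    /** durations holds alternating On/Off times in microseconds. A capture
        cut off mid-signal ends with an On that has no measured Off. */
    public static List<Integer> closeLastBurst(List<Integer> durations) {
        List<Integer> out = new ArrayList<>(durations);
        if (out.size() % 2 == 1) {        // odd length: trailing On, no Off
            out.add(DUMMY_FINAL_OFF_US);  // plug in a pseudo Off for DecodeIr
        }
        return out;
    }

    public static void main(String[] args) {
        // A truncated capture: On, Off, On with the final Off unknown.
        System.out.println(closeLastBurst(Arrays.asList(600, 300, 600)));
    }
}
```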