johnsfine wrote:
mtakahar wrote:
I'm very interested in hearing how it goes on your 2 cores x 2 HT system. Does it also have a 3.3V parallel port?
2 cores (per cpu chip) is something else. I don't have that. I have two complete CPUs on one motherboard and each supports HT. I never measured the printer port voltage.
I didn't necessarily mean "dual-core" or "multi-core" CPUs, just two real CPU cores total in the system. I understand why you read it that way, though, since Intel is talking more and more about these new dual/multi-core chips these days.
I figured out how to compile a .java file and used Winzip to substitute the resulting .class files into the .jar file. I know that isn't the right way and maybe at some point I'll experiment further to get the right way to work (without changing any environment or registry settings that would disturb the obsolete Microsoft Java, obsolete MSVC, and obsolete cygwin all of which have to remain installed and working).
Since you have a cygwin environment, you may already have GNU make. If you do, the included Makefile should work for the most part, and you won't need gcc unless you want to rebuild the native interface code written in C.
Try running the GNU make after commenting out the following lines:
Code: Select all
#CaptureIR.dll: ${JNIOBJS} JNIUtil.o
# gcc -O3 -Wall -D_JNI_IMPLEMENTATION_ -Wl,--kill-at \
# -I${JDKPATH}/include -I${JDKPATH}/include/win32 -I. \
# -shared -o $@ $^
If you think that would be useful, I can wrap it in a conditional make rule (ifeq (...)) in the next distribution, so you'd only need to edit one line (or set a variable on the make command line) to indicate that you don't want to rebuild CaptureIR.dll.
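Something along these lines is what I have in mind (just a sketch: BUILD_DLL is an illustrative variable name, not in the current Makefile; JNIOBJS and JDKPATH are the existing variables):
Code: Select all

```make
# Sketch only. Run "make BUILD_DLL=no" to skip rebuilding the native library.
# (Recipe lines must start with a tab.)
BUILD_DLL ?= yes

ifeq (${BUILD_DLL},yes)
CaptureIR.dll: ${JNIOBJS} JNIUtil.o
	gcc -O3 -Wall -D_JNI_IMPLEMENTATION_ -Wl,--kill-at \
	    -I${JDKPATH}/include -I${JDKPATH}/include/win32 -I. \
	    -shared -o $@ $^
endif
```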
Anyway, I changed minGapLen to 100 rather than the computed value it had.
Without that change, Prototype 4 was even worse than Prototype 3. It seems to throw whole captures away when there are a lot of gaps around 400 µs, so I see nothing at all and wouldn't have had any guess at what was wrong if I hadn't already seen the issue in Prototype 3.
With that tiny change I'm capturing those signals and decoding them without those problems.
Which type(s) of signals are you testing with? Perhaps I'll include them in my test file and see how it goes on my machine.
Also, do the frequencies look better than before on your machine(s)?
Maybe it takes a multi-CPU system to be this free of latency problems, and that 100 µs setting will cause spurious gaps on a less powerful system.
Sounds like that's what's happening...
But it's really good that we can run this on multiple machines with different specs and compare the results.
I like your idea of adding some setup choices, at least until we understand the behavior across a range of systems.
I was assuming that tweaking the demodulation parameters would belong in the advanced options, but they may have to go into the basic ones if this is something many people will have to change. Anyway, we can discuss this more later.
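To illustrate what minGapLen is deciding (this is only a sketch, not the actual CaptureIR code; all names here are made up): with gap durations in microseconds, any pause at or above the threshold ends the current burst, so the threshold determines whether a ~400 µs pause becomes a real burst boundary or gets absorbed into one burst.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only (hypothetical names, not the CaptureIR implementation):
// split a sequence of gap durations (µs) into bursts, where any gap
// at or above minGapLen closes the current burst.
public class GapSplit {
    public static List<List<Integer>> split(int[] gaps, int minGapLen) {
        List<List<Integer>> bursts = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        for (int g : gaps) {
            if (g >= minGapLen) {            // long silence: burst boundary
                if (!current.isEmpty()) {
                    bursts.add(current);
                    current = new ArrayList<>();
                }
            } else {
                current.add(g);              // still inside the same burst
            }
        }
        if (!current.isEmpty()) bursts.add(current);
        return bursts;
    }

    public static void main(String[] args) {
        int[] gaps = {30, 30, 400, 30, 30};
        // minGapLen = 100: the 400 µs pause becomes a boundary -> 2 bursts
        System.out.println(split(gaps, 100).size());  // 2
        // minGapLen = 1000: the pause is absorbed -> 1 burst
        System.out.println(split(gaps, 1000).size()); // 1
    }
}
```

On a slower system, scheduling latency can stretch an in-burst pause past a small threshold, which is the spurious-split risk with a fixed 100 µs setting.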
I have also noticed that the final pulse is lost. That doesn't matter much for a repeating signal but trashes one-time signals. Obviously you don't know the length of the final gap because you cut off the signal before it ends, but you can just plug in a big number as if you knew and include that last burst in decoding.
You mean the last whole on/off pair in the sequence, or just the last off? The cut-off happens either when it reaches the buffer limit or when the off period exceeds 0.5 sec (or something like that). If it's not reaching the limit and is still missing a whole pair, that may simply be a bug.
If it's because of the 0.5 sec timeout, then perhaps I should change the default value (this will be another setup option I'm planning to add, BTW).
If it's neither of the above (the final off period simply cannot be known because there's no following on), then plugging in a pseudo off value makes sense. It would be cleaner if DecodeIR could fill in the blank in the future, though.
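The pseudo-off idea could look something like this sketch (again, hypothetical names and a made-up 100 ms constant, not the actual code): if the capture ends on an on period, append a large off value so the final burst still reaches the decoder.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the suggested fix (not the actual CaptureIR code).
// Durations alternate on/off; an odd-length list means the final off
// is missing, so we append a large pseudo off value.
public class FinalGapPad {
    static final int PSEUDO_FINAL_GAP = 100_000; // 100 ms, arbitrary choice

    public static List<Integer> padFinalGap(List<Integer> durations) {
        List<Integer> out = new ArrayList<>(durations);
        if (out.size() % 2 == 1) {
            out.add(PSEUDO_FINAL_GAP);
        }
        return out;
    }

    public static void main(String[] args) {
        // Capture cut off after a final on period of 560 µs:
        System.out.println(padFinalGap(List.of(900, 450, 560)));
        // prints [900, 450, 560, 100000]
    }
}
```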
Later I'll see if I can find that in the java code and test my own correction. But for now I need to go elsewhere.
I was going to suggest doing something like this to compare before and after demodulation (un-demodulated bursts go to stdout), but it looks like there's a bug in handling the frequency array while converting the time-index data to non-demodulated burst data.
Code: Select all
if (demodulate)
{
rawSignal.computeBursts(); // <<< debug
System.out.println(rawSignal); // <<< debug
rawSignal.demodulate(carrier_detection_allowance,
min_cycles_per_burst);
}
else
rawSignal.computeBursts();
The thread crashes. I'll look into this tonight or tomorrow.
When it ends up with a bad capture, it's supposed to skip demodulation and show you the captured raw bursts. I think that's what was happening when you thought it was throwing away the data. I was planning to dump the errors into a file, as Greg does, by the time of an official release, but perhaps I should do it sooner than that. In the meantime, you can put a "pause" in the .bat file to see any errors printed to stderr/stdout.
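For instance, a wrapper .bat along these lines (the jar name and file names are placeholders for whatever your launcher actually runs) keeps the console window open and saves stderr for later:
Code: Select all

```bat
rem Placeholder names; adjust the jar name/options to your install.
java -jar CaptureIR.jar 2> capture_errors.txt
type capture_errors.txt
pause
```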
Hal