The system I have built uses a MOD5282 and two external interrupts to grab the output of an ADC.
I have a slow (~1 Hz) signal hooked up to Int3 and use a semaphore to detect when it is in the "grab the data" logic state. I then perform an OSSemPend on Int3 (to wait for it to return to its "I'm done" state) and enable Int1. During this window, the Int1 ISR grabs the data.
A relatively high-speed clock, up to 80 kHz (an output of the ADC that indicates when a conversion is complete), is wired to Int1. The Int1 ISR reads in two 16-bit values; OE (output enable) is used to latch the two 16-bit values from the ADC.
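For reference, the cycle described above can be sketched roughly like this. OSSemPend is the uC/OS call named in the post; the semaphore name, the EPIE bit handling, and the stand-in declarations (so the sketch compiles off-target) are my assumptions, not code from the actual system:

```c
#include <stdint.h>

/* Stand-ins so this sketch builds off-target; on the MOD5282 these come
   from ucos.h and sim5282.h. */
typedef int OS_SEM;
static void OSSemPend(OS_SEM *s, uint16_t ticks) { (void)s; (void)ticks; }
static volatile uint8_t epier;          /* stand-in for sim.eport.epier */

#define EPIE1 0x02   /* EPORT IRQ1 enable bit */
#define EPIE3 0x08   /* EPORT IRQ3 enable bit */

static OS_SEM Int3DoneSem;  /* posted by the IRQ3 ISR at the "I'm done" edge */

/* One acquisition cycle: arm IRQ1 (the ADC conversion-complete clock),
   block until IRQ3 signals the end of the window, then disarm IRQ1. */
static void GrabData(void)
{
    epier |= EPIE1;              /* enable IRQ1: ISR now samples the ADC   */
    OSSemPend(&Int3DoneSem, 0);  /* sleep until the IRQ3 ISR posts "done"  */
    epier &= (uint8_t)~EPIE1;    /* disable IRQ1 until the next cycle      */
}
```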
My Int1 ISR is:
INTERRUPT(out_irq1_pin_isr, 0x2700)
{
    sim.eport.epfr = 0x02;                /* clear the IRQ1 edge flag (EPF1, bit 1) */
    DataBuffer[data_count++] = *ExtLong;  /* reads 2x 16-bit values through D16-D31; A1 toggles */
}
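Since this ISR free-runs at up to 80 kHz, a bounds guard is cheap insurance against overrunning DataBuffer if the window ever runs long. The ISR body below shows the idea; DATA_BUFFER_LEN and the mock register/bus variables are hypothetical stand-ins so it compiles off-target (on the real board the INTERRUPT() wrapper, sim, and ExtLong come from the NetBurner headers):

```c
#include <stdint.h>

#define DATA_BUFFER_LEN 4096                  /* hypothetical buffer size        */
static volatile uint32_t DataBuffer[DATA_BUFFER_LEN];
static volatile unsigned data_count;
static volatile uint8_t  epfr_mock;           /* stand-in for sim.eport.epfr     */
static volatile uint32_t ext_mock;            /* stand-in for *ExtLong           */

/* Same ISR body with an overrun guard. */
void out_irq1_pin_isr_body(void)
{
    epfr_mock = 0x02;                         /* write-1-to-clear the EPF1 flag  */
    if (data_count < DATA_BUFFER_LEN)         /* drop samples rather than        */
        DataBuffer[data_count++] = ext_mock;  /* scribble past the buffer end    */
}
```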
I monitor the delay between the Int1 falling edge (the trigger) and the OE line using a digital scope; this roughly measures the time needed to grab the data. Most of the time the delay is around 5 usec, within my requirement to sample at an 80 kHz rate. Sometimes, however, the delay is as much as 1 msec!!! (Update: > 90 msec!)
Is there anything I can do to prevent this? During the "grab the data" interval, I'd like to disable everything in the MOD5282 except these two interrupts.
Thanks in advance,
Rich
[5-16-10] Further testing shows the lag can be > 90 msec between the interrupt edge and execution of the ISR. I disabled Autoupdate, DHCP and TCP ports with no sign of improvement. Arggh.
Critical External Interrupt and response time
Re: Critical External Interrupt and response time
Are you familiar with USER_ENTER_CRITICAL() and USER_EXIT_CRITICAL()?
Re: Critical External Interrupt and response time
I looked at the functions USER_ENTER_CRITICAL() and USER_EXIT_CRITICAL() but the manual indicates that these functions disable all hardware interrupts.
Re: Critical External Interrupt and response time
90 ms is waaaaay too high. The only way that could happen is if a higher-priority interrupt is poorly written or a critical section has been added in your app. IRQ1 is the lowest priority. What is your other ISR and what is it doing?
If you create a totally separate application and run only this interrupt, what timing do you get?
Re: Critical External Interrupt and response time
Interrupts have priority over tasks, so you don't need to worry about your TCP thread, the DHCP server, etc. One way or another, a 1 millisecond delay indicates the Int1 interrupt is not enabled. That can happen when a higher-priority interrupt (2-6) has occurred, or when the ICR bit has been set to disable the interrupt. How many other interrupts are you using? You can manually disable them using the Interrupt Control Register when you start this cycle, in the handler for Int3.
It strikes me as unusual to use Interrupt 1 with a mask of 0x2700, in effect making the lowest-priority interrupt uninterruptible.
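In register terms, masking everything except the two EPORT interrupts at the start of the cycle might look something like this. The EPIEn bit positions follow the MCF5282 EPORT (bit n for IRQn, matching the epfr = 0x02 write in the posted ISR); the save/restore helpers themselves are a hypothetical sketch:

```c
#include <stdint.h>

static volatile uint8_t epier;   /* stand-in for sim.eport.epier */

#define EPIE1 0x02   /* EPORT IRQ1 enable bit */
#define EPIE3 0x08   /* EPORT IRQ3 enable bit */

static uint8_t saved_epier;

/* Call at the start of the grab-data window (e.g. from the IRQ3 handler):
   leave only IRQ1 and IRQ3 enabled in the EPORT. */
static void MaskOtherEportInts(void)
{
    saved_epier = epier;
    epier = (uint8_t)(saved_epier & (EPIE1 | EPIE3));
}

/* Call when the window ends to restore whatever was enabled before. */
static void UnmaskEportInts(void)
{
    epier = saved_epier;
}
```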
Re: Critical External Interrupt and response time
What a weekend! It seems the lag is actually between 5 and 10 microseconds. The problem was the oscilloscope setup (high bandwidth and trigger noise) causing a screwy trigger. I bury my head in shame....
I am using only two interrupts, both external. Int1 and Int3. I enable them only when needed using the sim.eport.epier register.
Re: Critical External Interrupt and response time
Dang hardware, always causing problems for the software!
I assume 5-10 usec is acceptable. That sounds like a reasonable task switching time in my experience.
Re: Critical External Interrupt and response time
The key to drastically improving interrupt latencies is to use SRAM for the stacks of ALL running tasks. See the following application note for the technical details on how to do this:
http://www.netburner.com/downloads/nndk ... rmance.pdf
By default all system tasks are in SRAM except the idle task. You will want to move the idle task's stack into SRAM as well; otherwise latency will be greater whenever an interrupt fires during that task. Also add any other system task stacks you use, such as HTTP or FTP. Finally, make sure any user tasks are also in SRAM.
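The exact mechanics vary by NNDK version (the app note above is the authority), but with the GCC toolchain the general idea is to place a task's stack array in the on-chip SRAM output section via a section attribute. The section name ".sram" and the stack size below are assumptions; use whatever your NNDK linker script actually names the fast-memory section:

```c
#include <stdint.h>

#define IDLE_STK_SIZE 256   /* hypothetical stack size, in 32-bit words */

/* Place this stack in on-chip SRAM instead of SDRAM. The ".sram" section
   name is an assumption; check your linker script. Pass the top of this
   array as the stack when creating the task. */
static uint32_t IdleTaskStack[IDLE_STK_SIZE]
    __attribute__((section(".sram")));
```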
I have not benchmarked the interrupt latency in a few months, so these values may differ slightly in the newest NNDK. My last results for the 5282 were as follows; the numbers are better for faster-clocked processors such as the 5270/34:
OSTaskSwitch with semaphore, stacks in SDRAM = 23.04 usec
OSTaskSwitch with semaphore, stacks in SRAM = 7.32 usec
Interrupt service latency, stacks in SDRAM = 5.00 usec
Interrupt service latency, stacks in SRAM = 2.83 usec
Non-maskable interrupt latency (1 register push/pop), stacks in SDRAM = 1.06 usec
Non-maskable interrupt latency (1 register push/pop), stacks in SRAM = 0.51 usec
Also keep in mind that as you add stacks to SRAM, less SRAM is left for network buffers, which can reduce network throughput. Even with no free SRAM for network buffers you will still get more than 18 Mbit TCP and 27 Mbit UDP throughput. That bandwidth can nearly triple when SRAM is optimized for network performance, but it sounds like interrupt latency is more important to your application.
I posted this previously in another thread but here is the benchmark application I used to get the results I posted above. This zip file includes a txt file with results for all platforms, including network tests.
- Attachments
- NB_Benchmark.zip (26 KiB)
- Benchmark for task latency, IRQ latency, network bandwidth and memory transfer for all network platforms.
Re: Critical External Interrupt and response time
That (Larry's response) struck me as wiki-worthy, so I created a page and linked it into the HowTo page.