I’ve seen this scene way too many times:
Same OFDM signal. Two engineers standing at the same bench.
One reads -10 dBm, the other insists it’s -16 dBm.
Then the guessing starts—did the DUT change state? Is someone’s instrument “not accurate”? Did someone mess up the setup?
Most of the time the answer is simpler than people want to believe:
You’re not measuring the same kind of power.
Peak vs average vs channel power, RBW, VBW, detector mode, averaging method… if even one or two of these aren’t aligned, your numbers can be “confidently different.”
So instead of arguing whose spectrum analyzer is better, it’s usually more productive to align the measurement definition first.
This post is not a textbook and not a brand comparison.
It’s a practical R&D-debug mindset for answering one question fast:
When power numbers don’t match—what exactly is different about what you measured?
Be brutally clear: if you don’t define “power,” you’re guaranteed to argue
People say “I measured power,” but in practice they often mean different things. At least three common meanings show up in the lab:
1) Peak power (Peak / Marker)
You place a marker on the highest point and read that value.
Great for checking spurs or a tone peak—but it’s not total power.
2) Channel power / integrated power (Channel Power)
You define a bandwidth (20 MHz, 100 MHz, etc.), and the analyzer integrates power across that channel.
This is usually closer to “how big is this signal overall.”
3) Average power (Average / RMS / time-averaged)
This is where confusion explodes. “Average” might mean detector averaging, trace averaging, RMS detector, or something else depending on the instrument.
So I usually start with a boring question that saves a lot of wasted time:
Are we talking about the peak at a point—or the total power in a defined bandwidth?
If that’s not aligned, everything after it is noise.
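To make the gap concrete, here's a minimal numerical sketch (not an instrument procedure): one synthetic noise-like capture, three different "power" numbers. The sample rate, the 20 MHz channel width, and the signal model are illustrative assumptions, but the pattern holds for real OFDM: the peak reading sits well above the average, while channel power and average land close together when the signal actually fits inside the channel.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100e6   # sample rate (assumed, 100 MS/s)
n = 2**16

# Noise-like complex baseband signal, band-limited to +/-10 MHz
# as a stand-in for a 20 MHz OFDM channel.
x = rng.normal(size=n) + 1j * rng.normal(size=n)
X = np.fft.fft(x)
f = np.fft.fftfreq(n, d=1/fs)
X[np.abs(f) > 10e6] = 0
x = np.fft.ifft(X)

p = np.abs(x)**2                       # instantaneous envelope power (linear, uncalibrated)

peak_db = 10*np.log10(p.max())         # 1) "marker on the highest point"
avg_db  = 10*np.log10(p.mean())        # 3) RMS / time-averaged power

# 2) "channel power": integrate the spectrum over the defined 20 MHz channel
per_bin = np.abs(np.fft.fft(x))**2 / n**2   # rough per-bin power estimate
in_band = np.abs(f) <= 10e6
chan_db = 10*np.log10(per_bin[in_band].sum())

print(f"peak    : {peak_db:6.2f} dB")
print(f"average : {avg_db:6.2f} dB")
print(f"channel : {chan_db:6.2f} dB")
```

The point isn't the exact values; it's that all three are legitimate "power" readings of the same signal, and they only agree once you say which one you mean.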
Why peak often looks “too high” on OFDM signals
With OFDM (and most modern modulated signals), it’s very common for peak readings to look significantly higher than what you expect as “average power.”
That’s not the analyzer lying. It’s the signal.
OFDM is the sum of many subcarriers; every so often they add up in phase, so the envelope fluctuates in time and occasionally spikes well above its average. So:
- Peak is “the highest moment I happened to catch”
- Average/RMS is “the typical power level over time”
If you compare peak-marker readings to an average-power spec, being off by a few dB is normal. Sometimes more.
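If you want to see why "the highest moment I happened to catch" is a slippery number, here's a small sketch using a toy OFDM model (random QPSK on every subcarrier, one IFFT per symbol; the subcarrier count and capture lengths are arbitrary assumptions). The average barely moves, but the observed peak keeps climbing the longer you watch:

```python
import numpy as np

rng = np.random.default_rng(1)

def ofdm_envelope(n_symbols, n_subcarriers=1024):
    """Toy OFDM time-domain signal: random QPSK on every subcarrier."""
    bits = rng.integers(0, 4, size=(n_symbols, n_subcarriers))
    qpsk = np.exp(1j * (np.pi/4 + np.pi/2 * bits))
    return np.fft.ifft(qpsk, axis=1).ravel()

for n_sym in (10, 100, 1000, 10000):
    p = np.abs(ofdm_envelope(n_sym))**2
    papr_db = 10*np.log10(p.max() / p.mean())
    print(f"{n_sym:6d} symbols: peak-to-average = {papr_db:5.2f} dB")
```

So a peak-marker reading depends on the sweep or capture you happened to take; an average or channel-power reading is far more repeatable.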
In R&D work, I’ll often force the team to align on this first:
Are we comparing Peak, Channel Power, or some RMS/average metric?
The real trap: RBW (resolution bandwidth)
Many engineers treat RBW like a “display detail” setting:
“RBW just changes how fine the trace looks, right?”
For a CW tone, that intuition kind of works.
For OFDM, wideband modulation, and noise-like spectra—RBW can absolutely change what you see and what you read.
Here’s a practical way to think about RBW:
RBW is the width of the filter you’re using to ‘look’ at the spectrum. Change the filter width, and the measured noise and energy presentation changes.
Typical symptoms:
- increasing RBW raises the displayed noise floor by roughly 10*log10 of the RBW ratio (more noise bandwidth gets integrated; see the sketch after this list)
- wideband/modulated traces change shape and stability
- marker readings at a given point can shift
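A trivial sketch of that relationship (the RBW values are just examples, and the rule of thumb applies to noise-like power, not to a clean CW tone):

```python
import math

def noise_floor_shift_db(rbw_from_hz, rbw_to_hz):
    """Approximate change in displayed noise floor when RBW changes."""
    return 10 * math.log10(rbw_to_hz / rbw_from_hz)

print(noise_floor_shift_db(10e3, 100e3))   # 10 kHz -> 100 kHz: about +10 dB
print(noise_floor_shift_db(100e3, 1e6))    # 100 kHz -> 1 MHz:  about +10 dB
print(noise_floor_shift_db(1e6, 100e3))    # going the other way: about -10 dB
```

Two engineers one decade apart in RBW are already about 10 dB apart on anything noise-like before they disagree about anything else.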
So in R&D debugging, if we’re trying to align power numbers, I do something extremely unexciting:
Lock RBW first, then talk about power.
If two engineers use different RBW, they may not even be using the same “ruler.”
VBW, detector, averaging: you think you’re smoothing the trace—often you’re changing the number
This trio causes a lot of “my measurement doesn’t match yours” situations.
VBW (video bandwidth)
Often used to make the trace look smoother. That’s fine—but it also changes how stable the displayed reading is, and how people choose where to read.
One engineer uses a tiny VBW and reads a very “steady” line.
Another uses a wide VBW and sees a noisy trace, then reads a different point.
Same signal, different outcome.
Detector mode (Sample / Peak / Average / RMS, etc.)
Detector choice is basically “how the analyzer turns fast data into the trace you see.”
For OFDM, detector selection can strongly influence what number you read: a positive-peak detector will sit noticeably above a sample or RMS/average detector on the same noise-like trace.
Averaging method (linear vs log/dB averaging, trace average behavior)
This is a huge one. Some analyzers average in linear power, some average in dB, and some options are tied to detector settings.
Don’t assume “average is average.” It isn’t.
If you’re aligning measurements, don’t let averaging mode be a hidden default.
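Here's a minimal sketch of how big that gap can be, using a noise-like envelope as a stand-in for OFDM (the signal model is an assumption, but the roughly 2.5 dB offset for noise-like signals is a classic result): averaging in linear power and averaging in dB are simply not the same operation.

```python
import numpy as np

rng = np.random.default_rng(2)
# Envelope power samples of a noise-like (OFDM-ish) signal
x = rng.normal(size=100_000) + 1j * rng.normal(size=100_000)
p = np.abs(x)**2

lin_avg_db = 10*np.log10(p.mean())     # average in linear power, then convert to dB
log_avg_db = (10*np.log10(p)).mean()   # convert each sample to dB, then average

print(f"linear-power average: {lin_avg_db:6.2f} dB")
print(f"log (dB) average    : {log_avg_db:6.2f} dB")
print(f"difference          : {lin_avg_db - log_avg_db:4.2f} dB")  # ~2.5 dB
```

If one bench averages in power and the other averages the log trace, that's a built-in ~2.5 dB disagreement on a noise-like signal before anyone touches the DUT.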
If you care about total power, use built-in measurements (Channel Power / OBW / ACPR)
In R&D labs, I rarely recommend “marker + intuition” as a way to talk about OFDM total power.
It's too easy to get misled by RBW/VBW/detector/averaging, and the result is hard to reproduce from person to person.
If your goal is “total power inside a bandwidth,” use the analyzer’s built-in tools:
- Channel Power
- OBW (Occupied Bandwidth)
- ACPR (Adjacent Channel Power Ratio)
These functions at least do one thing right:
they define the measurement more explicitly and are easier to reproduce across the team.
Still, don’t assume two analyzers will match perfectly if settings differ—but it’s far better than “everyone reads a different marker.”
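If you drive the analyzer remotely, putting the built-in measurement into a script is a good way to freeze the definition. Below is a hedged PyVISA sketch: the VISA address is hypothetical, and the SCPI mnemonics follow common Keysight X-series-style short forms, so treat every command as an assumption to verify against your instrument's programming manual.

```python
import pyvisa

rm = pyvisa.ResourceManager()
# Hypothetical address; replace with your analyzer's VISA resource string.
sa = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")
sa.timeout = 10_000  # ms

print(sa.query("*IDN?").strip())
sa.write("*RST")

# Use the built-in channel-power measurement instead of reading markers by hand.
# All mnemonics below are X-series-style assumptions; check your manual.
sa.write("CONF:CHP")
sa.write("FREQ:CENT 3.5 GHz")        # example center frequency
sa.write("CHP:BAND:INT 20 MHz")      # integration (channel) bandwidth
sa.write("BAND:RES 100 kHz")         # lock RBW explicitly
sa.write("BAND:VID 1 MHz")           # lock VBW explicitly
sa.write("DET:TRAC1 AVER")           # average (RMS) detector for noise-like signals

# READ:CHP? typically returns channel power first (dBm), then power density.
chp_dbm = float(sa.query("READ:CHP?").split(",")[0])
print(f"channel power: {chp_dbm:.2f} dBm")
```

The value here isn't the specific commands; it's that the whole definition (bandwidth, RBW, VBW, detector) lives in one script that both benches run.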
Don’t forget the input chain: attenuation, preamp, ref level—these can create fake differences
Sometimes the disagreement isn’t “measurement math.” It’s the front end.
Input attenuation (Atten)
Different attenuation changes headroom, noise floor behavior, and whether you’re approaching compression.
Preamp ON/OFF
Preamp changes noise floor and visibility. Depending on signal level and settings, it can also change the stability of what you’re reading.
Reference level & overload/compression
In R&D debugging, one of the worst mistakes is measuring while the front end is near overload or compressing.
You can get numbers that look stable—but they’re not real.
A simple sanity move I use in the lab:
Change the input attenuation or reference level by one step and check whether the reading stays essentially the same.
If a small change makes the result “go weird,” stop blaming the DUT and fix the measurement chain first.
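That sanity move is easy to automate if the analyzer is already scripted. Here's a hedged continuation of the PyVISA sketch above (same caveats: hypothetical address, vendor-dependent SCPI mnemonics to verify against your manual): step the input attenuation and confirm the channel-power reading barely moves.

```python
import pyvisa

rm = pyvisa.ResourceManager()
sa = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")  # hypothetical address
sa.timeout = 10_000

sa.write("CONF:CHP")   # assumes the channel-power setup from the previous sketch

readings = []
for atten_db in (10, 20, 30):
    sa.write(f"POW:ATT {atten_db}")   # input attenuation (assumed mnemonic)
    chp_dbm = float(sa.query("READ:CHP?").split(",")[0])
    readings.append(chp_dbm)
    print(f"atten {atten_db:2d} dB -> channel power {chp_dbm:7.2f} dBm")

spread_db = max(readings) - min(readings)
print(f"spread across attenuation steps: {spread_db:.2f} dB")
# If the spread is more than a few tenths of a dB, suspect compression/overload
# (or a noise-floor limit) in the measurement chain before suspecting the DUT.
```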
My “minimum alignment checklist” (R&D-debug friendly)
If you want your team to stop arguing, here’s a simple order of operations:
1) Define what power you mean
Peak? Channel power? RMS/average? A standard metric?
2) Lock the key settings (don't rely on memory; see the settings-snapshot sketch after this checklist)
- RBW / VBW
- Detector mode
- Averaging method (linear vs dB)
- Input attenuation / preamp / reference level
3) Prefer built-in measurements for total power
Channel Power / OBW / ACPR instead of “everyone reading a marker.”
4) Do one sanity check to avoid chain illusions
Confirm you’re not near overload/compression.
For wideband/noise-like cases, verify that changing RBW shifts the displayed level by roughly the expected 10*log10 ratio and nothing else moves unexpectedly.
If you do those four things, a lot of “the analyzer is wrong” problems disappear—because you finally defined the measurement and used the same ruler.
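One more hedged sketch that pays off when two benches disagree: dump the key settings from item 2 into a text snapshot you can diff. Same caveats as before; the mnemonics are Keysight X-series-style assumptions and other vendors will differ.

```python
import pyvisa

rm = pyvisa.ResourceManager()
sa = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")  # hypothetical address

# Query list to snapshot; adjust mnemonics to your instrument.
queries = {
    "instrument":      "*IDN?",
    "RBW":             "BAND:RES?",
    "VBW":             "BAND:VID?",
    "detector":        "DET:TRAC1?",
    "average type":    "AVER:TYPE?",             # e.g. log vs power (RMS) averaging
    "attenuation":     "POW:ATT?",
    "preamp":          "POW:GAIN:STAT?",
    "reference level": "DISP:WIND:TRAC:Y:RLEV?",
}

for name, q in queries.items():
    print(f"{name:16s}: {sa.query(q).strip()}")
```

Run it on both benches, diff the two outputs, and most "the analyzer is wrong" arguments end right there.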
The one sentence I don’t love hearing
“This spectrum analyzer isn’t accurate.”
In R&D debugging, what I see far more often is:
the measurement definition wasn’t aligned.
Different detector modes, different RBW/VBW, different averaging—people end up measuring different things and then blaming the instrument.
Next time your numbers don’t match, ask one question first:
Are we comparing Peak or Channel Power?
That single question usually shrinks the argument immediately.
If you want, send me two short lines:
- your signal bandwidth (e.g., 20/100 MHz class)
- whether you’re trying to align peak power or channel power (or a specific standard metric)
I can help you list the settings that must be locked so you stop burning time in the lab.
About me & contact
I work on RF / optical / high-speed test setups and instrument supply chain. I help teams balance performance, budget, and risk—whether you’re buying new gear, used gear, renting, or using lab resources.
Website: https://maronlabs.com
Email: contact@maronlabs.com