francescopace | 3 months ago
You're spot on about the MVS approach. It's essentially a sliding window variance of the spatial turbulence (std dev across subcarriers), with adaptive thresholding based on the moving variance of that signal.
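For the curious, here is a minimal sketch of that idea in plain Python (names and window sizes are illustrative, not the actual code from the repo):

```python
from collections import deque
from statistics import pstdev, pvariance

def spatial_turbulence(csi_frame):
    """Std dev of amplitudes across subcarriers for one CSI frame."""
    return pstdev(csi_frame)

def moving_variance(turbulence, window=20):
    """Sliding-window variance of the turbulence signal."""
    buf = deque(maxlen=window)
    out = []
    for t in turbulence:
        buf.append(t)
        out.append(pvariance(buf) if len(buf) > 1 else 0.0)
    return out

# Synthetic example: 100 flat (idle) frames, then 100 fluctuating (motion) frames
idle = [[10.0] * 64 for _ in range(100)]                 # no variation at all
motion = [[10.0 + (i % 7) * (s % 5) for s in range(64)]  # frame-to-frame wobble
          for i in range(100)]

turb = [spatial_turbulence(f) for f in idle + motion]
mv = moving_variance(turb, window=20)
```

The moving variance stays at zero over the idle frames and rises once motion starts, which is what the adaptive threshold keys off.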
If you're interested in the MVS details, I wrote a free Medium article that walks through the segmentation algorithm step-by-step with visualizations. Links are in the README.
Your approach is actually quite similar to what I'm doing, just in a different order:
- My flow: Raw CSI → Segmentation (MVS) → Filters (Butterworth/Wavelet/Hampel/SG) → Feature extraction
- Your flow: Raw CSI → EWMA de-meaning → Dimensionality reduction → Feature extraction
The main difference is that I segment first to separate IDLE from MOTION states (keeping segmentation on raw, unfiltered CSI to preserve motion sensitivity), then only extract features during MOTION (to save CPU cycles).
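To make the filter stage concrete, here is a toy Hampel filter, standing in for the whole Butterworth/Wavelet/Hampel/SG bank (a simplified sketch, not the repo's implementation), which only ever runs on MOTION windows:

```python
from statistics import median

def hampel(window, k=3, t0=3.0):
    """Hampel outlier filter: replace any point more than t0 robust std devs
    (1.4826 * MAD) away from its rolling median with that median.
    Stand-in for the full filter bank; runs only on MOTION windows."""
    out = list(window)
    for i in range(len(window)):
        lo, hi = max(0, i - k), min(len(window), i + k + 1)
        neigh = window[lo:hi]
        med = median(neigh)
        mad = median(abs(x - med) for x in neigh)
        if abs(window[i] - med) > t0 * 1.4826 * mad:
            out[i] = med
    return out

# A lone spike in an otherwise flat window gets pulled back to the median:
spiky = [1.0, 1.0, 1.0, 50.0, 1.0, 1.0, 1.0]
clean = hampel(spiky)
```

Segmentation deliberately sees the raw stream; only windows already classified as MOTION pass through filters like this one.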
Thanks for the thoughtful feedback! Always great to exchange notes with someone who's been in the trenches with CSI signal processing.
roger_ | 3 months ago
francescopace | 3 months ago
The decision is a binary comparison: when moving_variance > threshold, the state is MOTION (movement detected); otherwise it is IDLE.
The features are extracted only during MOTION segments (to save CPU cycles) and published via MQTT.
They serve as rich foundation data for potential external ML models (e.g., to capture nuances like gestures, running, or falling), but they are absolutely not used for the core segmentation decision itself.
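Concretely, the gating looks roughly like this (a sketch: the topic name, extract_features, and the publish callback are placeholders, not the actual code):

```python
import json

IDLE, MOTION = "IDLE", "MOTION"

def extract_features(window):
    """Toy stand-in for the real feature set published over MQTT."""
    return {"mean": sum(window) / len(window),
            "peak_to_peak": max(window) - min(window)}

def step(moving_variance, threshold, window, publish):
    """One segmentation step: binary compare, then extract + publish
    features only when the state is MOTION (to save CPU cycles)."""
    state = MOTION if moving_variance > threshold else IDLE
    if state == MOTION:
        publish("csi/features", json.dumps(extract_features(window)))
    return state

# Usage with a stubbed publisher (a real setup would use e.g. paho-mqtt):
published = []
publish = lambda topic, payload: published.append((topic, payload))

step(0.1, 0.5, [1.0, 2.0], publish)   # IDLE: nothing published
step(0.9, 0.5, [1.0, 2.0], publish)   # MOTION: features published
```

Note the features never feed back into the state decision; that stays a pure threshold on the moving variance.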