>The current implementation adopts pseudo-spiking, where activations are approximated as spike-like signals at the tensor level, rather than true asynchronous event-driven spiking on neuromorphic hardware.
Isn't that in essence very similar to Quantization-Aware Training (QAT)?
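For intuition only (this is not the paper's exact recipe), a tensor-level pseudo-spike activation can be sketched much like the fake-quantization op used in QAT: the forward pass rounds activations to integer spike counts, and the backward pass uses a straight-through estimator. All names, thresholds, and parameters below are illustrative assumptions.

    # Hedged sketch of a pseudo-spiking activation, written like a
    # QAT-style fake-quantization op. Illustrative only; not the
    # SpikingBrain authors' implementation.
    import torch

    class PseudoSpike(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, threshold=1.0, max_spikes=4):
            ctx.save_for_backward(x)
            ctx.threshold = threshold
            ctx.max_spikes = max_spikes
            # Round activations to integer spike counts in [0, max_spikes],
            # analogous to rounding onto a low-bit grid in fake quantization.
            counts = torch.clamp(torch.round(x / threshold), 0, max_spikes)
            return counts * threshold

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # Straight-through estimator: pass gradients where the input
            # fell inside the representable range, zero elsewhere.
            mask = (x >= 0) & (x <= ctx.threshold * ctx.max_spikes)
            return grad_output * mask.to(grad_output.dtype), None, None

    x = torch.randn(8, requires_grad=True)
    y = PseudoSpike.apply(x)   # spike-count-quantized activations
    y.sum().backward()         # gradients flow via the STE

The resemblance to QAT is exactly the point of the question: both train with a discretized forward pass and a smoothed backward pass, the difference being whether the discrete values are later interpreted as quantized weights/activations or as spike events.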
SpikingBrain Technical Report: Spiking Brain-inspired Large Models https://arxiv.org/abs/2509.05276
https://news.ycombinator.com/item?id=45206420
Well, it would still allow deploying the trained model to SNN hardware, if such hardware existed.