# Mixed Chunk [Attention](Attention.md)
- An efficient linear approximation method that combines the benefits of partial (quadratic) and linear [Attention](Attention.md) mechanisms; it is accelerator-friendly and highly competitive in quality.
- The method works on chunks of tokens and leverages local (within-chunk) and global (cross-chunk) [Attention](Attention.md) spans; see the sketch below.
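To make the chunking scheme concrete, here is a minimal non-causal sketch in NumPy: full quadratic attention inside each chunk, plus a kernelized linear term that lets every position read a global summary built from all chunks. The ReLU feature map, the simple additive combination of the two terms, and all function names are illustrative assumptions rather than the exact formulation from the source.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mixed_chunk_attention(q, k, v, chunk_size):
    """Sketch of mixed chunk attention (single head, non-causal).

    q, k, v: arrays of shape (seq_len, d); seq_len is assumed to be
    a multiple of chunk_size. This is an illustrative approximation,
    not the source's exact method.
    """
    n, d = q.shape
    g = n // chunk_size
    # Split the sequence into g non-overlapping chunks: (g, chunk, d)
    qc, kc, vc = (x.reshape(g, chunk_size, d) for x in (q, k, v))

    # Local (quadratic) attention: full softmax attention within each chunk
    scores = qc @ kc.transpose(0, 2, 1) / np.sqrt(d)   # (g, chunk, chunk)
    local_out = softmax(scores, axis=-1) @ vc          # (g, chunk, d)

    # Global (linear) attention: a kernel feature map (ReLU here, as an
    # assumption) lets us summarize all chunks into one d x d matrix,
    # which every position then reads in O(d^2) time.
    phi_q, phi_k = np.maximum(qc, 0.0), np.maximum(kc, 0.0)
    kv = np.einsum("gcd,gce->de", phi_k, vc)           # global summary (d, d)
    z = phi_k.sum(axis=(0, 1))                         # normalizer (d,)
    global_out = (phi_q @ kv) / (phi_q @ z + 1e-6)[..., None]

    # Combine local and global contributions, restore (seq_len, d)
    return (local_out + global_out).reshape(n, d)

q = k = v = np.random.randn(128, 64)
out = mixed_chunk_attention(q, k, v, chunk_size=32)    # -> shape (128, 64)
```

For a fixed chunk size c, the local term costs O(n·c·d) and the global term O(n·d²), so the whole computation scales linearly in sequence length n, which is the source of the efficiency claim above.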