- 🔭 I’m interested in LLMs, storage, databases, operating systems, and mathematics
- 🌱 I’m currently working on LLM inference optimization at an AI startup
- 🎯 Focusing on LLM inference optimization; previously at Aliyun | SenseTime
- 🎓 HUST
- 📍 Hangzhou
- 🌐 https://mrxhub.me/
Pinned:

- [vllm-project/vllm](https://github.com/vllm-project/vllm): A high-throughput and memory-efficient inference and serving engine for LLMs