intel-analytics/ipex-llm

Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, ModelScope, etc.

Total stars: 5431

Total forks: 1168

Stars today: 265

Category: source code

Last updated: 1 year ago

