# Phi-3.5-Mini-Instruct Quick Start: CLI Invocation and Wrapping as an API Service
## 1. Introduction

Phi-3.5-Mini-Instruct is Microsoft's lightweight flagship small model, with strong logical reasoning, code generation, and question-answering abilities. This article walks through invoking the model from the command line and wrapping it as an API service for more flexible deployment.

## 2. Environment Setup

### 2.1 Hardware Requirements

- GPU: NVIDIA card with ≥ 8 GB VRAM (RTX 3060 or better recommended)
- RAM: 16 GB or more
- Storage: at least 10 GB free space

### 2.2 Software Dependencies

```bash
pip install torch transformers fastapi uvicorn
```

## 3. Basic Command-Line Usage

### 3.1 Loading and Initializing the Model

```python
from transformers import pipeline

# Initialize the chat pipeline
chat_pipe = pipeline(
    "text-generation",
    model="microsoft/Phi-3.5-Mini-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
```

### 3.2 Single-Turn Example

```python
response = chat_pipe(
    "Implement quicksort in Python",
    max_new_tokens=512,
    temperature=0.7
)
print(response[0]["generated_text"])
```

### 3.3 Multi-Turn Conversation

```python
# Conversation history management
conversation = []

def chat(message):
    global conversation
    conversation.append({"role": "user", "content": message})
    response = chat_pipe(
        conversation,
        max_new_tokens=1024,
        do_sample=True
    )
    # When given a message list, the pipeline returns the full conversation;
    # the last entry is the assistant's new reply
    assistant_reply = response[0]["generated_text"][-1]["content"]
    conversation.append({"role": "assistant", "content": assistant_reply})
    return assistant_reply
```

## 4. Wrapping as an API Service

### 4.1 Basic FastAPI Service

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str
    max_tokens: int = 1024
    temperature: float = 0.7

@app.post("/chat")
async def chat_endpoint(request: ChatRequest):
    response = chat_pipe(
        request.message,
        max_new_tokens=request.max_tokens,
        temperature=request.temperature
    )
    return {"response": response[0]["generated_text"]}
```

### 4.2 Starting the API Service

```bash
uvicorn main:app --host 0.0.0.0 --port 8000
```

### 4.3 API with Conversation History

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from uuid import uuid4

app = FastAPI()
sessions = {}

class SessionRequest(BaseModel):
    message: str
    session_id: str | None = None
    max_tokens: int = 1024
    temperature: float = 0.7

@app.post("/chat")
async def chat_with_history(request: SessionRequest):
    # Create a new session when the client does not supply an id
    if not request.session_id:
        request.session_id = str(uuid4())
        sessions[request.session_id] = []
    conversation = sessions[request.session_id]
    conversation.append({"role": "user", "content": request.message})
    try:
        response = chat_pipe(
            conversation,
            max_new_tokens=request.max_tokens,
            temperature=request.temperature
        )
        assistant_reply = response[0]["generated_text"][-1]["content"]
        conversation.append({"role": "assistant", "content": assistant_reply})
        return {
            "response": assistant_reply,
            "session_id": request.session_id
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```

## 5. Advanced Configuration and Optimization

### 5.1 Performance Tuning

```python
import torch
from transformers import pipeline

# Optimized pipeline configuration (4-bit quantization requires the
# bitsandbytes package; newer transformers versions prefer passing a
# BitsAndBytesConfig via quantization_config instead)
chat_pipe = pipeline(
    "text-generation",
    model="microsoft/Phi-3.5-Mini-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    model_kwargs={
        "load_in_4bit": True,  # 4-bit quantization
        "bnb_4bit_compute_dtype": torch.bfloat16,
        "bnb_4bit_use_double_quant": True
    }
)
```

### 5.2 Streaming Responses

The `text-generation` pipeline has no `stream=True` flag; token-by-token streaming goes through `transformers.TextIteratorStreamer`: generation runs on a background thread while the response body iterates over decoded text chunks as they arrive.

```python
from threading import Thread

from fastapi.responses import StreamingResponse
from transformers import TextIteratorStreamer

@app.post("/stream_chat")
async def stream_chat(request: ChatRequest):
    streamer = TextIteratorStreamer(
        chat_pipe.tokenizer, skip_prompt=True, skip_special_tokens=True
    )
    # Run generation in the background; the streamer yields text chunks
    Thread(target=chat_pipe, kwargs={
        "text_inputs": request.message,
        "max_new_tokens": request.max_tokens,
        "temperature": request.temperature,
        "streamer": streamer,
    }).start()

    def generate():
        for chunk in streamer:
            yield chunk

    return StreamingResponse(generate(), media_type="text/plain")
```

## 6. Summary

This article covered command-line invocation of Phi-3.5-Mini-Instruct and how to wrap it as an API service, two ways to integrate the model flexibly into your applications:

- CLI mode suits quick tests and scripted calls
- an API service makes integration with other systems easy
- the advanced optimizations can noticeably improve inference efficiency
- streaming responses improve the user experience

Choose a deployment style based on your actual needs; for production, the history-enabled API combined with the performance-tuning parameters is recommended.

Want to explore more AI images and application scenarios? The CSDN 星图镜像广场 (StarMap image gallery) offers a rich set of prebuilt images covering LLM inference, image generation, video generation, model fine-tuning, and more, with one-click deployment.
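As a closing aside, the session bookkeeping used by the history-enabled endpoint can be exercised without loading the model at all. The sketch below is a stand-alone illustration (the assistant reply is a hard-coded stub, not real model output, and `get_or_create_session` is a hypothetical helper mirroring the endpoint's create-or-reuse logic):

```python
from uuid import uuid4

# In-memory session store, as in the FastAPI example above
sessions = {}

def get_or_create_session(session_id=None):
    # New callers get a fresh session id and an empty history
    if not session_id or session_id not in sessions:
        session_id = str(uuid4())
        sessions[session_id] = []
    return session_id, sessions[session_id]

# First turn: no session id supplied, so one is created
sid, conv = get_or_create_session()
conv.append({"role": "user", "content": "Implement quicksort in Python"})
conv.append({"role": "assistant", "content": "(stubbed model reply)"})

# Second turn: the client sends the session id back and history is reused
sid2, conv2 = get_or_create_session(sid)
print(sid2 == sid, len(conv2))  # → True 2
```

In production the same pattern needs an eviction policy (for example a per-session TTL), since the `sessions` dict otherwise grows without bound as clients come and go.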