This setup uses a decent pile of retired phones (think Samsung SM Note10, Xiaomi Mi 5 class devices):
Requirements: the GPU must support Vulkan 1.2 or newer; no particular model is required. The Mali G76 in an Exynos 9820, for example, can take on about 10 layers.
The more devices the better. Slow devices drag down the compute speed, but they add capacity (i.e. they raise the maximum model size you can run).
This walkthrough uses the deepseek-r1:14B model as the example.
First, download the original model weights from an HF mirror, then quantize them to INT8/FP16 (optional, but otherwise it's far too slow; if you own a V100/H100, pretend I said nothing).
Note: Adreno/Mali GPUs only support Q8_0, Q4_0 and FP16 quantization; with anything else the ggml compute pipeline cannot be created (precision order: F32 > F16 > Q8 > Q4; the unsupported quant types sit between Q8 and Q4).
How to quantize: git lfs clone the llama.cpp repository with --depth 1 (search for the repository address yourself).
Also clone the model repository you need (add --depth 1 if disk space is short; if it's really tight, delete the .git folder once the clone finishes).
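For reference, one possible way to pull the weights from a mirror; this is only a sketch, hf-mirror.com is just an example endpoint, and the repo path assumes the DeepSeek-R1-Distill-Qwen-14B distill that shows up in the log further down:
- # clone the model weights from an HF mirror; --depth 1 keeps the history small
- git lfs install
- git clone --depth 1 https://hf-mirror.com/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- # optional: reclaim disk space once the files are checked out
- rm -rf DeepSeek-R1-Distill-Qwen-14B/.git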
- python -m venv ./model_converter
- source ./model_converter/bin/activate
- pip install transformers torch
- python convert_hf_to_gguf.py ./path-to-your-model-repo --outfile ./out.gguf --outtype f32
(you can also pass f16 or q8_0 as the --outtype)
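If you exported at f32/f16 but need Q8_0 for the phone GPUs, the llama-quantize tool that gets built alongside llama.cpp (see the build step below) can re-quantize the GGUF offline. A minimal sketch, assuming the out.gguf from the previous step:
- # offline re-quantization: f32/f16 GGUF -> Q8_0, which Adreno/Mali accept
- ./build/bin/llama-quantize ./out.gguf ./out-q8_0.gguf Q8_0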
The model is ready; now build llama.cpp.
On each phone: install Termux, then
- pkg install build-essential vulkan-loader vulkan-headers cmake git binutils -y
Clone llama.cpp again on the phone (I'm not allowed to post links here).
- cd llama.cpp
- mkdir build
- cd build
- cmake .. -DGGML_VULKAN=1 -DGGML_RPC=1
- make -j$(nproc)
- ./bin/rpc-server -p 9999 -H <this phone's LAN IP address>
Warning: check the GPU's Vulkan version with CPU-Z first. If it doesn't support 1.2, drop the -DGGML_VULKAN=1 flag, otherwise ggml will crash.
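If you prefer to check from inside Termux rather than with CPU-Z, vulkaninfo reports the driver's API version. This is an optional extra check; the vulkan-tools package name is an assumption and may not exist in every Termux repo:
- pkg install vulkan-tools -y      # assumed package name; skip if unavailable
- vulkaninfo | grep apiVersion     # you want 1.2.x or newer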
Repeat the same steps on every phone.
On a PC with a CUDA-capable NVIDIA card, use -DGGML_CUDA=1 instead; it's faster and more stable than Vulkan.
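For reference, the two build variants mentioned above look like this (a sketch; run whichever applies from the same build directory as before):
- # device without Vulkan 1.2: RPC worker over the plain CPU backend
- cmake .. -DGGML_RPC=1
- # PC with an NVIDIA GPU: CUDA backend instead of Vulkan
- cmake .. -DGGML_CUDA=1 -DGGML_RPC=1
- make -j$(nproc)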
Steps to run the model:
./bin/llama-cli -m <the .gguf you just quantized> --rpc <comma-separated IPs of every device running rpc-server, each with :9999 appended> --ngl 30 (the --ngl part matters; without it no layers are offloaded and nothing gets accelerated)
For example: 192.168.1.1 -> SM-N971N, 192.168.1.2 -> Huawei Pad 5, and the model is out.gguf.
The final command (run on the machine that did the quantization) is: ./bin/llama-cli -m out.gguf --rpc 192.168.1.1:9999,192.168.1.2:9999 --ngl 30 (tune the number yourself)
Suggestion: on weaker devices, start ./rpc-server with the -m parameter to cap its memory/VRAM (e.g. 1024 means 1 GB); llama.cpp will then assign fewer layers to those devices.
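For instance, to cap the phone at 192.168.1.1 to roughly 1 GB (the value is in MB, which matches the "1024 MiB free" line in the log below):
- # weaker device: advertise only ~1 GB so the scheduler offloads fewer layers here
- ./bin/rpc-server -p 9999 -H 192.168.1.1 -m 1024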
No pics, no proof, but I don't have an image host =( (some output omitted):
- ~/DeepSeek-R1-Distill-Qwen-14B $ ../llama/build/bin/llama-cli -m deepseek_14b.gguf --rpc 192.168.1.1:50052 -ngl 4 --device RPC[192.168.1.1:50052]
- ggml_vulkan: Found 1 Vulkan devices:
- ggml_vulkan: 0 = Adreno (TM) 650 (Qualcomm Technologies Inc. Adreno Vulkan Driver) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 32768 | matrix cores: none
- build: 52 (af6ae1e) with clang version 19.1.7 for aarch64-unknown-linux-android24
- main: llama backend init
- main: load the model and apply lora adapter, if any
- llama_model_load_from_file_impl: using device RPC[192.168.1.1:50052] (RPC[192.168.1.1:50052]) - 1024 MiB free
- llama_model_loader: loaded meta data with 27 key-value pairs and 579 tensors from deepseek_14b.gguf (version GGUF V3 (latest))
- llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
- llama_model_loader: - kv 0: general.architecture str = qwen2
- llama_model_loader: - kv 1: general.type str = model
- llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 14B
- llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
- llama_model_loader: - kv 4: general.size_label str = 14B
- llama_model_loader: - kv 5: general.license str = mit
- llama_model_loader: - kv 6: qwen2.block_count u32 = 48
- llama_model_loader: - kv 7: qwen2.context_length u32 = 131072
- llama_model_loader: - kv 8: qwen2.embedding_length u32 = 5120
- llama_model_loader: - kv 9: qwen2.feed_forward_length u32 = 13824
- llama_model_loader: - kv 10: qwen2.attention.head_count u32 = 40
- llama_model_loader: - kv 11: qwen2.attention.head_count_kv u32 = 8
- llama_model_loader: - kv 12: qwen2.rope.freq_base f32 = 1000000.000000
- llama_model_loader: - type f32: 241 tensors
- llama_model_loader: - type q8_0: 338 tensors
- print_info: file format = GGUF V3 (latest)
- print_info: file type = Q8_0
- print_info: file size = 14.62 GiB (8.50 BPW)
- print_info: general.name = DeepSeek R1 Distill Qwen 14B
- print_info: vocab type = BPE
- print_info: n_vocab = 152064
- print_info: n_merges = 151387
- print_info: BOS token = 151646 '<|begin▁of▁sentence|>'
- print_info: EOS token = 151643 '<|end▁of▁sentence|>'
- print_info: EOT token = 151643 '<|end▁of▁sentence|>'
- p
- load_tensors: loading model tensors, this can take a while... (mmap = true)
- load_tensors: offloading 4 repeating layers to GPU
- load_tensors: offloaded 4/49 layers to GPU
- load_tensors: CPU_Mapped model buffer size = 13852.63 MiB
- load_tensors: RPC[192.168.1.1:50052] model buffer size = 1115.89 MiB
- ............................................................................................
- llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
- llama_context: CPU output buffer size = 0.58 MiB
- init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
- init: CPU KV buffer size = 704.00 MiB
- init: RPC[192.168.1.1:50052] KV buffer size = 64.00 MiB
- llama_context: KV self size = 768.00 MiB, K (f16): 384.00 MiB, V (f16): 384.00 MiB
- llama_context: RPC[192.168.1.1:50052] compute buffer size = 368.00 MiB
- llama_context: CPU compute buffer size = 368.01 MiB
- llama_context: graph nodes = 1782
- llama_context: graph splits = 3
- common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
- common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
- main: llama threadpool init, n_threads = 8
- main: chat template is available, enabling conversation mode (disable it with -no-cnv)
- main: chat template example:
- You are a helpful assistant
- <|User|>Hello<|Assistant|>Hi there<|end▁of▁sentence|><|User|>How are you?<|Assistant|>
- system_info: n_threads = 8 (n_threads_batch = 8) / 8 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |
- main: interactive mode on.
- sampler seed: 3508949674
- > 你好
- <think>
- </think>
- 你好!很高兴见到你,有什么我可以帮忙的吗?无论是学习、工作还是生活中的问题,都可以告诉我哦! 😊
- >
- llama_perf_sampler_print: sampling time = 13.61 ms / 36 runs ( 0.38 ms per token, 2644.34 tokens per second)
- llama_perf_context_print: load time = 68480.58 ms
- llama_perf_context_print: prompt eval time = 31415.90 ms / 5 tokens ( 6283.18 ms per token, 0.16 tokens per second)
- llama_perf_context_print: eval time = 329027.01 ms / 31 runs (10613.77 ms per token, 0.09 tokens per second)
- llama_perf_context_print: total time = 516102.31 ms / 36 tokens
- Interrupted by user
It's a bit slow because only two devices are involved. Before RPC was in play, a single phone couldn't run the model at all (not enough memory: the 14B model takes about 14 GB after Q8, while each of my phones has only 8 GB of RAM; with mmap enabled it stalls so badly that nothing gets generated).
Software requirements, for the record: devices on Android 12+ can contribute their GPU to the computation.
Any Linux-kernel system (a Linux distro or an Android device) can, in theory, contribute CPU compute.
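To make that last point concrete, a Linux PC on the same LAN can join exactly like the phones do (a sketch only; 192.168.1.10 is a made-up address for the PC):
- # CPU-only build of the RPC worker on the Linux box
- cmake .. -DGGML_RPC=1 && make -j$(nproc)
- ./bin/rpc-server -p 9999 -H 192.168.1.10
- # on the head node, append the new worker to --rpc
- ./bin/llama-cli -m out.gguf --rpc 192.168.1.1:9999,192.168.1.2:9999,192.168.1.10:9999 --ngl 30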