Setting up Termux on a Redmi K40
Redmi K40 specs
- CPU: Snapdragon 870
- RAM: 12 GB + 6 GB extended
- Storage: 256 GB
- OS: Xiaomi HyperOS 1.0.6
1. Download Termux
- # Latest release list
- https://github.com/termux/termux-app/releases
- # Download link (I picked the universal build; note that it requires Android >= 7.0)
- https://github.com/termux/termux-app/releases/download/v0.119.0-beta.3/termux-app_v0.119.0-beta.3+apt-android-7-github-debug_universal.apk
Once the download finishes, just install the APK on the phone.
- # Kernel info
- uname -a
- # Linux localhost 4.19.157-perf-g92c089fc2d37 #1 SMP PREEMPT Wed Jun 5 13:27:08 UTC 2024 aarch64 Android
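If you want to double-check the compatibility requirements from step 1, Android's property tool can report the OS version directly from the Termux shell. A minimal sketch (getprop is a stock Android binary, not a Termux package):
- # Android version; this APK requires >= 7.0
- getprop ro.build.version.release
- # CPU architecture; aarch64 means the arm64 part of the universal build is in use
- uname -m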
2. Switch to a mirror in mainland China
- # termux-change-repo -> Single mirror -> Mirrors in Chinese Mainland
- termux-change-repo
- # Update the apt package lists
- apt update && apt upgrade
- # Grant storage access
- # After running this command, device storage is reachable via the ~/storage directory
- termux-setup-storage
- # Hold a wakelock so the device does not go to sleep
- termux-wake-lock
- # Optional
- # Enable the extra repository with root-related packages
- pkg install root-repo
- # Enable the extra repository with X11/GUI packages
- pkg install x11-repo
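If the termux-change-repo menu is inconvenient, the mirror can also be set by editing the apt source list directly. A sketch assuming the Tsinghua (TUNA) mirror and the default main repository; the exact mirror URL is an assumption, so check the mirror's own help page if apt update fails afterwards:
- # Termux keeps its source list under $PREFIX/etc/apt/
- cp $PREFIX/etc/apt/sources.list $PREFIX/etc/apt/sources.list.bak
- echo "deb https://mirrors.tuna.tsinghua.edu.cn/termux/apt/termux-main stable main" > $PREFIX/etc/apt/sources.list
- apt update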
3. Install openssh for remote login
- # Install openssh
- apt install openssh
- # Set the Termux password; it is also the SSH login password
- passwd
- # Start the sshd service. Note: Termux's sshd listens on port 8022 by default, not the usual 22
- # If you change the password later, restart sshd
- sshd
- # Check the device's IP address
- ifconfig
- Warning: cannot open /proc/net/dev (Permission denied). Limited output.
- lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
- inet 127.0.0.1 netmask 255.0.0.0
- unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 1000 (UNSPEC)
- wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
- inet 192.168.1.123 netmask 255.255.255.0 broadcast 192.168.1.255
- unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 3000 (UNSPEC)
- # Check the username
- whoami
- # Returns u0_a401 here; the username differs on every device
- # Log in remotely from a computer
- ssh u0_a401@192.168.1.123 -p 8022
- # Make sshd start automatically
- pkg i termux-services
- sv-enable sshd
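Optionally, password logins can be replaced with SSH keys. A sketch run on the computer (not the phone), reusing the username and IP from above as placeholders and assuming ssh-copy-id is installed there:
- # Generate a key pair if you do not have one yet
- ssh-keygen -t ed25519
- # Copy the public key to the phone (Termux's sshd listens on 8022)
- ssh-copy-id -p 8022 u0_a401@192.168.1.123
- # Later logins no longer ask for the password
- ssh -p 8022 u0_a401@192.168.1.123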
4. Install a Debian environment under proot (pick whichever Linux distribution you are used to)
- # proot provides an unprivileged, chroot-like isolated environment (no root required)
- # Install proot-distro
- apt install proot-distro
- # Show the help text
- proot-distro --help
- ...
- List of the available commands:
- help - Show this help information.
- backup - Backup a specified distribution.
- install - Install a specified distribution.
- list - List supported distributions and their
- installation status.
- login - Start login shell for the specified distribution.
- remove - Delete a specified distribution.
- WARNING: this command destroys data!
- rename - Rename installed distribution.
- reset - Reinstall from scratch a specified distribution.
- WARNING: this command destroys data!
- restore - Restore a specified distribution.
- WARNING: this command destroys data!
- clear-cache - Clear cache of downloaded files.
- copy - Copy files from/to distribution.
- ...
- # List the supported distributions
- proot-distro list
- Supported distributions (format: name < alias >):
- * Adélie Linux < adelie >
- * Alpine Linux < alpine >
- * Arch Linux < archlinux >
- * Artix Linux < artix >
- * Chimera Linux < chimera >
- * Debian (bookworm) < debian >
- * deepin < deepin >
- * Fedora < fedora >
- * Manjaro < manjaro >
- * OpenSUSE < opensuse >
- * Pardus < pardus >
- * Rocky Linux < rockylinux >
- * Ubuntu (24.04) < ubuntu >
- * Void Linux < void >
- Install selected one with: proot-distro install < alias >
- # Install Debian (the rootfs is pulled from GitHub, so this can be slow)
- proot-distro install debian
- # List distributions and their install status (Debian now shows Installed: yes)
- proot-distro list --verbose
- ...
- * Debian (bookworm)
- Alias: debian
- Installed: yes
- Comment: Stable release.
- Architectures: aarch64, i686, arm, x86_64
- ...
- # Log in to Debian
- proot-distro login debian
- root@localhost:~# uname -a
- Linux localhost 6.2.1-PRoot-Distro #1 SMP PREEMPT Wed Jun 5 13:27:08 UTC 2024 aarch64 GNU/Linux
- # Debian bookworm is Debian 12. The reported 6.2.1-PRoot-Distro kernel differs from the Termux host's 4.19.157 because proot spoofs the uname output; the device is still running the host kernel
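By default the proot rootfs is isolated from the phone's shared storage. proot-distro's login command can bind a host directory into the guest; a sketch, where the /mnt/sdcard mount point is an arbitrary choice and the exact --bind syntax should be confirmed against proot-distro login --help:
- # Run from Termux; requires termux-setup-storage to have been granted earlier
- proot-distro login debian --bind /storage/emulated/0:/mnt/sdcard
- # Inside Debian, the phone's shared storage now appears under /mnt/sdcard
- ls /mnt/sdcard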
5. Deploy an AI model
Ollama website: https://ollama.com/
- # Enter the Debian environment
- proot-distro login debian
- # Install Ollama
- curl -fsSL https://ollama.com/install.sh | sh
- ...
- >>> Cleaning up old version at /usr/local/lib/ollama
- >>> Installing ollama to /usr/local
- >>> Downloading Linux arm64 bundle
- ######################################################################## 100.0%
- lspci: Unknown option 'd' (see "lspci --help")
- lspci: Unknown option 'd' (see "lspci --help")
- >>> The Ollama API is now available at 127.0.0.1:11434.
- >>> Install complete. Run "ollama" from the command line.
- WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode.
- # Show Ollama's commands and flags
- root@localhost:/usr/local# ollama --help
- Large language model runner
- Usage:
- ollama [flags]
- ollama [command]
- Available Commands:
- serve Start ollama
- create Create a model from a Modelfile
- show Show information for a model
- run Run a model
- stop Stop a running model
- pull Pull a model from a registry
- push Push a model to a registry
- list List models
- ps List running models
- cp Copy a model
- rm Remove a model
- help Help about any command
- Flags:
- -h, --help help for ollama
- -v, --version Show version information
- Use "ollama [command] --help" for more information about a command.
- # Startup script
- start_ollama.sh
- # Contents:
- #!/bin/bash
- # Listen on all interfaces so other devices on the LAN can reach the API (Ollama binds to 127.0.0.1 by default)
- export OLLAMA_HOST=0.0.0.0
- # Run the server in the background and log to ollama.log
- nohup ollama serve > ollama.log 2>&1 &
- # Make the script executable and start it
- chmod +x ./start_ollama.sh
- ./start_ollama.sh
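- # (Sketch) Confirm the server is up before pulling models; /api/version is part of Ollama's HTTP API
- curl http://127.0.0.1:11434/api/version
- # Expected: a small JSON object such as {"version":"..."}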
- # Pull a model
- # Find model names on the Ollama site, e.g. https://ollama.com/library/deepseek-r1
- ollama pull deepseek-r1:1.5b
- # List downloaded models
- ollama list
- ...
- NAME ID SIZE MODIFIED
- deepseek-r1:1.5b e0979632db5a 1.1 GB 37 seconds ago
- ...
- # Run the model interactively
- ollama run deepseek-r1:1.5b
- # How to use the SoC's GPU for acceleration will be looked into later
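Because the startup script sets OLLAMA_HOST=0.0.0.0, the API is also reachable from other machines on the same network. A sketch run from the computer, reusing the phone's IP from step 3 as a placeholder; /api/generate is Ollama's native endpoint:
- curl http://192.168.1.123:11434/api/generate -d '{"model": "deepseek-r1:1.5b", "prompt": "Hello", "stream": false}'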
Note: this is only an experimental deployment. Without any Ollama performance tuning (for example, capping CPU usage), it saturates all 8 CPU cores (around 795% CPU) and generation is fairly slow, so it is mainly useful as an OpenAI-style API for debugging. For real use you would probably need to find a way to enable GPU acceleration.
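For the OpenAI-style debugging mentioned above, recent Ollama versions expose an OpenAI-compatible endpoint under /v1. The second command below is a sketch of one way to keep the model from saturating all eight cores, using the num_thread option on the native endpoint; treat its exact behaviour (and whether it actually helps speed) as something to verify against your Ollama version:
- # OpenAI-compatible chat completion, called from the computer
- curl http://192.168.1.123:11434/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "deepseek-r1:1.5b", "messages": [{"role": "user", "content": "Hello"}]}'
- # Native endpoint with a thread cap (assumption: num_thread limits the CPU worker threads)
- curl http://192.168.1.123:11434/api/generate -d '{"model": "deepseek-r1:1.5b", "prompt": "Hello", "stream": false, "options": {"num_thread": 4}}'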
The ALIENTEK (正点原子) notes on deploying to the RK3588 may offer some hints, but they still need to be explored and verified, and they will not necessarily carry over to the Snapdragon 870's Adreno 650 GPU: https://alientek.yuque.com/mlv64o/gcrfbv/kaznwt5nv2lvug7g?singleDoc#
On Adreno GPU performance, see: 探究高通Adreno GPU的性能 (an article exploring the performance of Qualcomm's Adreno GPUs).