Raspberry Pi Supercomputer
I recently picked up a research project on distributed computing paradigms, so I got a pile of Raspberry Pis and built a small HPC cluster. To stay compatible with upper-layer applications I plan to learn MPI, and it also helps adapt Ruta to edge computing and container networking. I'm writing down some notes here and sharing them along the way.
Hardware Setup
I bought nine of the Raspberry Pi 4 8GB boards on Taobao, plus some rack frames; for long-running operation, heatsinks and fans are still necessary.
A single layer assembled looks like this:
And after full assembly:
There's one oddball mixed in too: yes, a Google TPU.
Finally, everything connects to a switch; I just grabbed a random gigabit switch that can do SPAN…
What I'm happiest with is this power supply: giving every RPi its own wall charger would have been a pain, and this unit solved that problem nicely.
Software Setup
For the OS installation, refer to the following link:
https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi
Of course, lazy as I am, I only install one node and clone the rest. First, the single-node steps: log in with the default credentials ubuntu:ubuntu, then configure the network:
sudo netplan generate
sudo vim /etc/netplan/50-cloud-init.yaml

The contents of the YAML file are as follows:
network:
  ethernets:
    eth0:
      addresses:
        - 192.168.99.151/24
      dhcp4: false
      gateway4: 192.168.99.1
      nameservers:
        addresses:
          - 8.8.8.8
        search: []
  version: 2

Next, install the MPI library. We chose OpenMPI for this experiment; the build goes as follows:
sudo apt install build-essential
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.0.tar.gz
tar vzxf openmpi-4.1.0.tar.gz
cd openmpi-4.1.0/
./configure
make -j4
sudo make install

When that finishes, also install NFS, which we will use later to share the MPI hostfile and other data across the nodes:

sudo apt install nfs-kernel-server
Then edit /etc/hosts:
192.168.99.151 rpi1
192.168.99.152 rpi2
192.168.99.153 rpi3
192.168.99.154 rpi4
192.168.99.155 rpi5
192.168.99.156 rpi6
192.168.99.157 rpi7
192.168.99.158 rpi8
192.168.99.159 rpi9

Creating the Image
Once the first node is configured, shut it down, pull the TF card, plug it into a desktop machine, and dump the whole disk to an image file. I recommend installing dc3dd, which shows progress, which is nice:
sudo dc3dd if=/dev/sdb of=rpi.img
dc3dd 7.2.646 started at 2021-04-12 14:00:41 +0800
compiled options:
command line: dc3dd if=/dev/sdb of=rpi.img
device size: 249737216 sectors (probed), 127,865,454,592 bytes
sector size: 512 bytes (probed)
127865454592 bytes ( 119 G ) copied ( 100% ), 3215 s, 38 M/s
input results for device `/dev/sdb':
249737216 sectors in
0 bad sectors replaced by zeros
output results for file `rpi.img':
249737216 sectors out
dc3dd completed at 2021-04-12 14:54:16 +0800

The resulting image is the full 128 GB size of the TF card... Of course I'm not dumb enough to write that straight onto the other eight cards; there is a tool called pishrink:
wget https://raw.githubusercontent.com/Drewsif/PiShrink/master/pishrink.sh
chmod a+x pishrink.sh
sudo mv pishrink.sh /usr/local/bin

pishrink uses the partition table layout to trim off the unused space; in the end the whole image file dropped to 4.4 GB:
zartbot@zartbotWS:~$ sudo pishrink.sh rpi.img
pishrink.sh v0.1.2
pishrink.sh: Gathering data ...
pishrink.sh: Checking filesystem ...
writable: Inode 7344 extent tree (at level 1) could be shorter. IGNORED.
writable: Inode 23208 extent tree (at level 1) could be shorter. IGNORED.
writable: Inode 28025 extent tree (at level 1) could be shorter. IGNORED.
writable: Inode 28259 extent tree (at level 1) could be shorter. IGNORED.
writable: 134805/7531920 files (0.1% non-contiguous), 1277736/31151355 blocks
resize2fs 1.45.5 (07-Jan-2020)
pishrink.sh: Shrinking filesystem ...
resize2fs 1.45.5 (07-Jan-2020)
Resizing the filesystem on /dev/loop19 to 1077398 (4k) blocks.
Begin pass 2 (max = 161413)
Relocating blocks XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 951)
Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 4 (max = 19579)
Updating inode references XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/loop19 is now 1077398 (4k) blocks long.
pishrink.sh: Shrinking image ...
pishrink.sh: Shrunk rpi.img from 120G to 4.4G ...

After that it's just dd again to clone the image onto the other cards; each time a card is written, boot it up and change the hostname and IP address.
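The clone-back and per-node fix-up can be sketched like this (the device path /dev/sdb and the example addresses are assumptions matching the earlier dump; check your card's device with lsblk first):

```shell
# write the shrunk image onto each remaining TF card
sudo dc3dd if=rpi.img of=/dev/sdb

# on the first boot of each clone (here, the second board):
sudo hostnamectl set-hostname rpi2
sudo vim /etc/netplan/50-cloud-init.yaml   # change 192.168.99.151 to .152, etc.
sudo netplan apply
```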
OpenMPI
We designate rpi1 as the head node and export a directory over NFS to the other nodes:
mkdir /home/ubuntu/mpi
sudo vi /etc/exports
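The export line itself is not shown; a minimal sketch of /etc/exports, assuming the cluster subnet used earlier and common export options, would be:

```shell
# /etc/exports on rpi1: share the MPI directory read-write with the cluster subnet
/home/ubuntu/mpi 192.168.99.0/24(rw,sync,no_subtree_check)
```

Run `sudo exportfs -a` afterwards to publish the export.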
On the other nodes, edit fstab to mount the shared partition.
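A sketch of the client side, assuming nfs-common is installed and the share is mounted at the same path on every node:

```shell
# on rpi2..rpi9: install the NFS client and create the mount point
sudo apt install nfs-common
mkdir -p /home/ubuntu/mpi
```

and the /etc/fstab entry:

```
# mount rpi1's export at the same path on every node
rpi1:/home/ubuntu/mpi  /home/ubuntu/mpi  nfs  defaults  0  0
```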
Set up passwordless SSH login.
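For example (a sketch; the host names come from the /etc/hosts file above):

```shell
# on rpi1: generate a key pair once, then push the public key to every worker
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for i in 2 3 4 5 6 7 8 9; do ssh-copy-id ubuntu@rpi$i; done
```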
Reboot rpi2 through rpi9, and then we can start verifying MPI. Create a hostfile in the shared directory:

vim /home/ubuntu/mpi/hostfile

The file contents are as follows:
rpi1 slots=4
rpi2 slots=4
rpi3 slots=4
rpi4 slots=4
rpi5 slots=4
rpi6 slots=4
rpi7 slots=4
rpi8 slots=4
rpi9 slots=4

Now let's write our first MPI program, using OpenMPI's hello example:
/*
* Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
* University Research and Technology
* Corporation. All rights reserved.
* Copyright (c) 2006 Cisco Systems, Inc. All rights reserved.
*
* Sample MPI "hello world" application in C
*/
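The listing above is truncated to the license header. A minimal body consistent with the output shown below is sketched here (OpenMPI ships an equivalent examples/hello_c.c; the printf in the shipped version also appends a numeric version at the end):

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int rank, size, len;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Init(&argc, &argv);                 /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id (rank)       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes      */
    MPI_Get_library_version(version, &len); /* e.g. "Open MPI v4.1.0, ..."    */
    printf("Hello, world, I am %d of %d, (%s)\n", rank, size, version);
    MPI_Finalize();

    return 0;
}
```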
MPI_Init initializes the MPI environment; MPI_Comm_rank returns each process's rank (its id within the whole job), and MPI_Comm_size gives the total number of MPI processes in the cluster. The program then fetches the library version and prints a message. Compile it:
ubuntu@rpi1:~/mpi$ mpicc hello.c -o hello
Run it, and you can see all 36 processes (9 nodes × 4 slots) come online:
ubuntu@rpi1:~/mpi$ mpirun --allow-run-as-root -npernode 4 -hostfile /home/ubuntu/mpi/hostfile hello
Hello, world, I am 1 of 36, (Open MPI v4.1.0, package: Open MPI ubuntu@rpi1 Distribution, ident: 4.1.0, repo rev: v4.1.0, Dec 18, 2020, 106)
Hello, world, I am 3 of 36, (Open MPI v4.1.0, package: Open MPI ubuntu@rpi1 Distribution, ident: 4.1.0, repo rev: v4.1.0, Dec 18, 2020, 106)
Hello, world, I am 0 of 36, (Open MPI v4.1.0, package: Open MPI ubuntu@rpi1 Distribution, ident:
<....>
Hello, world, I am 23 of 36, (Open MPI v4.1.0, package: Open MPI ubuntu@rpi1 Distribution, ident: 4.1.0, repo rev: v4.1.0, Dec 18, 2020, 106)
Hello, world, I am 13 of 36, (Open MPI v4.1.0, package: Open MPI ubuntu@rpi1 Distribution, ident: 4.1.0, repo rev: v4.1.0, Dec 18, 2020, 106)

VSCode for MPI
You can install OpenMPI on your own workstation, install VSCode with the C/C++ IntelliSense extension, and then define c_cpp_properties.json in the workspace's .vscode directory:
{
"configurations": [
{
"name": "Linux",
"includePath": [
"${workspaceFolder}/**",
"/usr/include"
],
"defines": [],
"compilerPath": "/usr/bin/mpicc",
"cStandard": "c11",
"cppStandard": "gnu++14",
"intelliSenseMode": "clang-x64"
}
],
"version": 4
}

This makes writing code a lot more convenient. In the next installment we'll start learning some actual MPI...