Preparation
- Two virtual machines/servers
- The main walkthrough was captured on CentOS Stream 9 / RHEL 9 (the package versions below show el9); a CentOS 7 alternative using the Train release is given where noted
Preliminary setup
Everything in this part must be done on both nodes.
1. Switch the yum repository
Check that the repository provides a given package:
[root@localhost ~]# yum list | grep net-tools
net-tools.x86_64 2.0-0.62.20160912git.el9 @anaconda
net-tools.x86_64 2.0-0.64.20160912git.el9 baseos
2. Install the net-tools package
[root@controller ~]# yum install -y net-tools.x86_64
正在更新 Subscription Management 软件仓库。
无法读取客户身份
本系统尚未在权利服务器中注册。可使用 subscription-manager 进行注册。
上次元数据过期检查:0:33:37 前,执行于 2024年10月09日 星期三 10时17分19秒。
软件包 net-tools-2.0-0.62.20160912git.el9.x86_64 已安装。
依赖关系解决。
===========================================================
软件包 架构 版本 仓库 大小
===========================================================
升级:
net-tools x86_64 2.0-0.64.20160912git.el9 baseos 308 k
事务概要
===========================================================
升级 1 软件包
总下载:308 k
下载软件包:
net-tools-2.0-0.64.2016091 631 kB/s | 308 kB 00:00
-----------------------------------------------------------
总计 629 kB/s | 308 kB 00:00
运行事务检查
事务检查成功。
运行事务测试
事务测试成功。
运行事务
准备中 : 1/1
升级 : net-tools-2.0-0.64.20160912git.el9.x86_6 1/2
运行脚本: net-tools-2.0-0.64.20160912git.el9.x86_6 1/2
清理 : net-tools-2.0-0.62.20160912git.el9.x86_6 2/2
运行脚本: net-tools-2.0-0.62.20160912git.el9.x86_6 2/2
验证 : net-tools-2.0-0.64.20160912git.el9.x86_6 1/2
验证 : net-tools-2.0-0.62.20160912git.el9.x86_6 2/2
已更新安装的产品。
已升级:
net-tools-2.0-0.64.20160912git.el9.x86_64
完毕!
3. Set the Linux hostname
Method 1
Set it directly with a command:
hostnamectl set-hostname <your-hostname>
The change takes effect immediately; log out and reconnect so the new name shows in your shell prompt.
Method 2
Edit the contents of /etc/hostname:
vi /etc/hostname
Delete the existing contents and replace them with the name you want to display.
Reboot (or run hostnamectl set-hostname to apply the change without rebooting), then reconnect.
4. Create address mappings
Edit the /etc/hosts file and append an "IP name" line for each node:
vi /etc/hosts
Add the address and mapped name of both machines.
Save and exit, then check that the mappings work (for example, ping the other node by name).
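For example, with the sample subnet used later in this guide (the compute address here is an assumed example; substitute your nodes' real IPs), the appended lines would look like:

```
192.168.32.140 controller
192.168.32.141 compute
```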
5. Disable the system firewall
# Disable start on boot
[root@controller ~]# systemctl disable firewalld
# Stop the firewall
[root@controller ~]# systemctl stop firewalld
# Check the firewall status
[root@controller ~]# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.ser>
Active: inactive (dead)
Docs: man:firewalld(1)
I. The Chrony time synchronization service
1. Edit the server-side configuration
Run the following on the control node (controller) only.
Open the configuration file: vi /etc/chrony.conf
Add a directive allowing the controller's NAT-mode subnet through: allow 192.168.32.0/24
Change the subnet to match your own IP range.
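On the controller, the relevant part of /etc/chrony.conf then looks like this (the pool line is the distribution default and stays as shipped; the subnet is the example value):

```
# upstream time source (default entry, unchanged)
pool 2.centos.pool.ntp.org iburst
# allow clients from the host-only/NAT subnet
allow 192.168.32.0/24
```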
2. Edit the client-side configuration
Run the following on the compute node (compute).
Open the configuration file: vi /etc/chrony.conf
Delete the four default server/pool entries and add one pointing at the control node: server controller iburst
Replace controller with the mapping name you added earlier.
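On the compute node, after removing the default entries, the client configuration reduces to a single upstream:

```
# sync only against the control node (name from /etc/hosts)
server controller iburst
```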
3. Enable the NTP service at boot
Run the following on both nodes.
# Enable at boot
[root@controller ~]# systemctl enable chronyd
# Restart the chronyd service
[root@controller ~]# systemctl restart chronyd
# Check its status
[root@controller ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; preset: enabled)
Active: active (running) since Wed 2024-10-09 11:17:34 CST; 1s ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Process: 42425 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 42427 (chronyd)
Tasks: 1 (limit: 22744)
Memory: 1.0M
CPU: 22ms
CGroup: /system.slice/chronyd.service
└─42427 /usr/sbin/chronyd -F 2
10月 09 11:17:34 controller systemd[1]: Starting NTP client/server...
10月 09 11:17:34 controller chronyd[42427]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRI>
10月 09 11:17:34 controller chronyd[42427]: Frequency 1.170 +/- 3.317 ppm read from /var/lib/chrony/drift
10月 09 11:17:34 controller chronyd[42427]: Using right/UTC timezone to obtain leap second data
10月 09 11:17:34 controller chronyd[42427]: Loaded seccomp filter (level 2)
10月 09 11:17:34 controller systemd[1]: Started NTP client/server.
4. Verify time synchronization with the control node
Run on the compute node:
[root@compute ~]# chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 2 6 17 6 +290us[ +118us] +/- 16ms
II. Installing the OpenStack platform
Run the following on both the control node and the compute node.
1. Browse the available packages
# List the OpenStack-related packages
[root@controller ~]# yum list |grep openstack
buildbot-master-openstack.noarch 3.11.7-1.el9 epel
centos-release-openstack-antelope.noarch 1-5.el9s extras-common
centos-release-openstack-bobcat.noarch 1-2.el9s extras-common
centos-release-openstack-caracal.noarch 1-2.el9s extras-common
centos-release-openstack-dalmatian.noarch 1-1.el9s extras-common
centos-release-openstack-yoga.noarch 1-4.el9s extras-common
centos-release-openstack-zed.noarch 1-4.el9s extras-common
centos-release-openstackclient-xena.noarch 1-1.el9s extras-common
ha-openstack-support.x86_64 4.10.0-28.el9 appstream
python-openstackclient-doc.noarch 6.2.0-4.el9 epel
python-openstackclient-lang.noarch 6.2.0-4.el9 epel
python-openstackdocstheme-doc.noarch 3.0.0-3.el9 epel
python3-mrack-openstack.noarch 1.19.0-1.el9 epel
python3-openstackclient.noarch 6.2.0-4.el9 epel
python3-openstackdocstheme.noarch 3.0.0-3.el9 epel
python3-openstacksdk.noarch 1.0.1-5.el9 epel
python3-openstacksdk-tests.noarch 1.0.1-5.el9 epel
resalloc-openstack.noarch
2. Pick a release and install its repository package
yum install -y centos-release-openstack-<release>.noarch
For example:
yum install -y centos-release-openstack-antelope.noarch
Wait for the installation to finish.
[root@controller ~]# yum install -y centos-release-openstack-antelope.noarch
正在更新 Subscription Management 软件仓库。
无法读取客户身份
This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.
上次元数据过期检查:0:12:44 前,执行于 2024年10月15日 星期二 15时04分48秒。
依赖关系解决。
===============================================================================================================================================================================
软件包 架构 版本 仓库 大小
===============================================================================================================================================================================
安装:
centos-release-openstack-antelope noarch 1-5.el9s extras-common 8.2 k
安装依赖关系:
centos-gpg-keys noarch 9.0-26.el9 baseos 13 k
centos-release-automotive noarch 9.0-8.el9iv extras-common 12 k
centos-release-ceph-quincy noarch 1.0-2.el9s extras-common 7.4 k
centos-release-cloud noarch 1-1.el9s extras-common 7.9 k
centos-release-messaging noarch 1-4.el9s extras-common 8.4 k
centos-release-nfv-common noarch 1-5.el9s extras-common 7.8 k
centos-release-nfv-openvswitch noarch 1-5.el9s extras-common 7.5 k
centos-release-rabbitmq-38 noarch 1-4.el9s extras-common 7.4 k
centos-release-storage-common noarch 2-5.el9s extras-common 8.3 k
centos-stream-repos noarch 9.0-26.el9 baseos 10 k
事务概要
===============================================================================================================================================================================
安装 11 软件包
总下载:99 k
安装大小:30 k
下载软件包:
(1/11): centos-stream-repos-9.0-26.el9.noarch.rpm 94 kB/s | 10 kB 00:00
(2/11): centos-gpg-keys-9.0-26.el9.noarch.rpm 36 kB/s | 13 kB 00:00
(3/11): centos-release-cloud-1-1.el9s.noarch.rpm 105 kB/s | 7.9 kB 00:00
(4/11): centos-release-messaging-1-4.el9s.noarch.rpm 117 kB/s | 8.4 kB 00:00
(5/11): centos-release-nfv-common-1-5.el9s.noarch.rpm 43 kB/s | 7.8 kB 00:00
(6/11): centos-release-nfv-openvswitch-1-5.el9s.noarch.rpm 235 kB/s | 7.5 kB 00:00
(7/11): centos-release-openstack-antelope-1-5.el9s.noarch.rpm 129 kB/s | 8.2 kB 00:00
(8/11): centos-release-rabbitmq-38-1-4.el9s.noarch.rpm 226 kB/s | 7.4 kB 00:00
(9/11): centos-release-storage-common-2-5.el9s.noarch.rpm 126 kB/s | 8.3 kB 00:00
(10/11): centos-release-ceph-quincy-1.0-2.el9s.noarch.rpm 8.6 kB/s | 7.4 kB 00:00
(11/11): centos-release-automotive-9.0-8.el9iv.noarch.rpm 3.4 kB/s | 12 kB 00:03
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
总计 27 kB/s | 99 kB 00:03
运行事务检查
事务检查成功。
运行事务测试
下载的软件包保存在缓存中,直到下次成功执行事务。
您可以通过执行 'yum clean packages' 删除软件包缓存。
错误:事务测试失败:
file /etc/redhat-release from install of centos-release-automotive-9.0-8.el9iv.noarch conflicts with file from package redhat-release-9.3-0.5.el9.x86_64
file /etc/system-release from install of centos-release-automotive-9.0-8.el9iv.noarch conflicts with file from package redhat-release-9.3-0.5.el9.x86_64
file /etc/system-release-cpe from install of centos-release-automotive-9.0-8.el9iv.noarch conflicts with file from package redhat-release-9.3-0.5.el9.x86_64
file /usr/lib/os-release from install of centos-release-automotive-9.0-8.el9iv.noarch conflicts with file from package redhat-release-9.3-0.5.el9.x86_64
file /usr/lib/rpm/macros.d/macros.dist from install of centos-release-automotive-9.0-8.el9iv.noarch conflicts with file from package redhat-release-9.3-0.5.el9.x86_64
file /usr/lib/systemd/system-preset/90-default.preset from install of centos-release-automotive-9.0-8.el9iv.noarch conflicts with file from package redhat-release-9.3-0.5.el9.x86_64
3. Install the OpenStack client
(Note: the transaction-test failure captured above occurs on a registered RHEL host, where the pulled-in centos-release-automotive package conflicts with redhat-release; resolve that conflict, or use a CentOS Stream host, before continuing.)
Check the available packages:
[root@controller yum.repos.d]# yum list |grep openstackclient
centos-release-openstackclient-xena.noarch 1-1.el9s extras-common
python-openstackclient-doc.noarch 6.2.0-4.el9 epel
python-openstackclient-lang.noarch 6.2.0-4.el9 epel
python3-openstackclient.noarch
Install it:
yum -y install python3-openstackclient.noarch
Check the installed version:
[root@controller yum.repos.d]# openstack --version
openstack 6.2.0
4. Enable the OpenStack repository by installing the RDO release RPM
Install the rdo-release package with yum. I have mirrored all of the release RPMs, so you can use my link below, or download them manually from centos.org.
yum install -y https://cdn.b52m.cn/rdo-release/(version).noarch.rpm
For example, since centos-release-openstack-antelope.noarch was installed above,
the matching RDO release is rdo-release-antelope-2.el9s:
yum install -y https://cdn.b52m.cn/rdo-release/rdo-release-antelope-2.el9s.noarch.rpm
Name | Build time |
rdo-release-dalmatian-1.el9s | 2024-09-30 12:21:12 |
rdo-release-caracal-1.el9s | 2024-04-16 08:25:25 |
rdo-release-bobcat-1.el9s | 2023-10-17 13:13:18 |
rdo-release-yoga-2.el9s | 2023-04-14 06:49:33 |
rdo-release-zed-2.el9s | 2023-04-14 06:44:28 |
rdo-release-antelope-2.el9s | 2023-04-12 16:26:19 |
rdo-release-antelope-1.el9s | 2023-03-21 13:19:33 |
rdo-release-zed-1.el9s | 2022-10-24 13:45:18 |
rdo-release-yoga-1.el9s | 2022-04-06 15:14:14 |
rdo-release-yoga-1.el8 | 2022-04-06 15:08:00 |
rdo-release-ussuri-4.el8 | 2022-02-03 15:08:56 |
rdo-release-victoria-4.el8 | 2022-02-02 11:22:56 |
rdo-release-wallaby-2.el8 | 2022-02-02 09:05:04 |
rdo-release-xena-2.el8 | 2022-02-02 07:41:30 |
rdo-release-xena-1.el8 | 2021-10-18 09:36:21 |
rdo-release-wallaby-1.el8 | 2021-04-15 14:31:15 |
rdo-release-train-4.el8 | 2021-03-05 11:42:46 |
rdo-release-victoria-3.el8 | 2021-03-05 10:31:27 |
rdo-release-ussuri-3.el8 | 2021-03-05 10:31:18 |
rdo-release-victoria-2.el8 | 2020-11-18 09:09:53 |
rdo-release-ussuri-2.el8 | 2020-11-18 08:11:21 |
rdo-release-victoria-1.el8 | 2020-11-11 09:54:16 |
rdo-release-victoria-0.el8 | 2020-10-06 08:04:28 |
rdo-release-ussuri-1.el8 | 2020-05-12 09:33:44 |
rdo-release-ussuri-0.el8 | 2020-05-07 16:34:01 |
rdo-release-train-3.el8 | 2020-04-13 14:40:22 |
rdo-release-train-2.el8 | 2020-04-06 13:36:53 |
rdo-release-train-1 | 2019-10-16 07:22:33 |
rdo-release-train-0.1 | 2019-10-07 19:04:54 |
rdo-release-train-0 | 2019-10-07 15:45:16 |
rdo-release-stein-3 | 2019-09-19 16:15:36 |
rdo-release-queens-2 | 2019-09-19 10:55:06 |
rdo-release-rocky-2 | 2019-09-19 10:34:53 |
rdo-release-stein-2 | 2019-05-23 09:31:44 |
rdo-release-stein-1 | 2019-05-08 17:15:35 |
rdo-release-stein-0 | 2019-03-28 15:20:06 |
rdo-release-rocky-1 | 2019-03-10 13:06:39 |
rdo-release-queens-1 | 2018-02-28 11:19:44 |
rdo-release-queens-0 | 2018-02-22 15:56:07 |
rdo-release-pike-1 | 2017-08-31 22:06:40 |
rdo-release-pike-0 | 2017-08-28 11:22:33 |
rdo-release-mitaka-7 | 2017-06-13 23:18:42 |
rdo-release-newton-5 | 2017-05-15 22:12:43 |
rdo-release-ocata-3 | 2017-05-15 22:09:00 |
rdo-release-ocata-2 | 2017-02-22 16:19:26 |
rdo-release-ocata-1 | 2017-02-17 23:43:11 |
rdo-release-ocata-0 | 2016-11-09 23:15:00 |
rdo-release-newton-4 | 2016-11-09 23:01:19 |
rdo-release-mitaka-6 | 2016-11-09 22:32:22 |
rdo-release-newton-3 | 2016-10-06 13:31:54 |
rdo-release-newton-2 | 2016-09-09 10:25:36 |
rdo-release-newton-1 | 2016-09-01 23:20:38 |
rdo-release-newton-0 | 2016-07-07 21:48:11 |
rdo-release-liberty-5 | 2016-06-14 11:31:35 |
rdo-release-mitaka-5 | 2016-06-13 22:13:13 |
rdo-release-mitaka-3 | 2016-04-22 13:47:36 |
rdo-release-liberty-3 | 2016-04-22 13:38:47 |
rdo-release-kilo-2 | 2016-04-22 13:11:07 |
rdo-release-mitaka-2 | 2016-04-11 08:24:06 |
rdo-release-mitaka-1 | 2016-04-04 19:15:44 |
5. Install the SELinux policy package
Change the SELinux policy:
vi /etc/selinux/config
Find the SELINUX line and change it to disabled:
SELINUX=disabled
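The same edit can be made non-interactively with sed. The sketch below runs against a local copy named selinux-config so it can be tried anywhere; on the node you would point sed at /etc/selinux/config instead:

```shell
# create a stand-in for /etc/selinux/config (illustration only)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux-config
# flip the mode to disabled in place
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux-config
grep '^SELINUX=' selinux-config   # prints SELINUX=disabled
```

Remember that switching SELinux to disabled only takes full effect after a reboot.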
Find the package:
[root@controller yum.repos.d]# yum list |grep openstack-selinux
openstack-selinux.noarch 0.8.40-1.el9s @openstack-antelope
openstack-selinux-devel.noarch 0.8.40-1.el9s openstack-antelope
openstack-selinux-test.noarch 0.8.40-1.el9s openstack-antelope
Install the openstack-selinux package:
yum -y install openstack-selinux.noarch
The steps below are the CentOS 7 alternative, using the Train release.
1) Find the Train release package
[root@controller ~]# yum list |grep train
centos-release-openstack-train.noarch 1-1.el7.centos extras
2) Install the Train release package
yum -y install centos-release-openstack-train.noarch
First enter the yum repository configuration directory:
cd /etc/yum.repos.d/
Delete the newly generated repo files, then write a custom configuration:
rm CentOS-Ceph-Nautilus.repo
rm CentOS-NFS-Ganesha-28.repo
rm CentOS-OpenStack-train.repo
rm CentOS-QEMU-EV.repo
rm CentOS-Storage-common.repo
Create a new configuration file:
vi openstack.repo
Write the following configuration into it and save:
[base]
name=base
baseurl=http://repo.huaweicloud.com/centos/7/os/x86_64/
enabled=1
gpgcheck=0
[extras]
name=extras
baseurl=http://repo.huaweicloud.com/centos/7/extras/x86_64/
enabled=1
gpgcheck=0
[updates]
name=updates
baseurl=http://repo.huaweicloud.com/centos/7/updates/x86_64/
enabled=1
gpgcheck=0
[train]
name=train
baseurl=http://repo.huaweicloud.com/centos/7/cloud/x86_64/openstack-train/
enabled=1
gpgcheck=0
[virt]
name=virt
baseurl=http://repo.huaweicloud.com/centos/7/virt/x86_64/kvm-common/
enabled=1
gpgcheck=0
Finally, rebuild the repository index and cache:
yum clean all
yum makecache
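Since the five repo stanzas follow one pattern, the file can also be generated with a small shell loop. This sketch writes to ./openstack.repo in the current directory (copy it to /etc/yum.repos.d/ on the node); note the option key is spelled enabled=1:

```shell
# generate the five Huawei-mirror repo stanzas
out=./openstack.repo
: > "$out"
base=http://repo.huaweicloud.com/centos/7
while read -r id path; do
  printf '[%s]\nname=%s\nbaseurl=%s/%s/\nenabled=1\ngpgcheck=0\n\n' \
    "$id" "$id" "$base" "$path" >> "$out"
done <<'EOF'
base os/x86_64
extras extras/x86_64
updates updates/x86_64
train cloud/x86_64/openstack-train
virt virt/x86_64/kvm-common
EOF
grep -c '^\[' "$out"   # prints 5
```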
3) Install the OpenStack client (CentOS 7 uses the Python 2 build)
yum -y install python2-openstackclient.noarch
After installation, check the version information:
openstack --version
4) Install the openstack-selinux policy package
yum install -y openstack-selinux.noarch
III. OpenStack supporting services
The following services are installed on the control node only.
1. The MariaDB database service
- MariaDB stores users, roles, networks, and other platform information
Find the packages
Search for the mariadb-server package:
[root@controller ~]# yum list |grep mariadb-server
mariadb-server.x86_64 3:10.5.22-1.el9 appstream
mariadb-server-galera.x86_64 3:10.5.22-1.el9 appstream
mariadb-server-utils.x86_64 3:10.5.22-1.el9 appstream
Search for the PyMySQL package:
[root@controller ~]# yum list |grep PyMySQL
python3-PyMySQL.noarch 0.10.1-6.el9 appstream
python3.11-PyMySQL.noarch 1.0.2-2.el9 appstream
python3.11-PyMySQL+rsa.noarch 1.0.2-2.el9 appstream
python3.12-PyMySQL.noarch 1.1.0-3.el9 appstream
python3.12-PyMySQL+rsa.noarch 1.1.0-3.el9 appstream
Install the packages (first line for CentOS Stream 9; second line for CentOS 7/Train):
yum -y install mariadb-server.x86_64 python3-PyMySQL.noarch
yum -y install mariadb-server.x86_64 python2-PyMySQL.noarch
Configuration file
vi /etc/my.cnf.d/openstack.cnf
Enter the following (bind-address is the controller's management IP; change it to your own):
[mysqld]
bind-address=192.168.32.140
default-storage-engine=innodb
innodb_file_per_table=on
max_connections=4096
collation-server=utf8_general_ci
character-set-server=utf8
Start the database service
# Enable at boot
[root@controller ~]# systemctl enable mariadb
# Start it
[root@controller ~]# systemctl start mariadb
# Check its status
[root@controller ~]# systemctl status mariadb
● mariadb.service - MariaDB 10.5 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: disabled)
Active: active (running) since Tue 2024-10-15 17:13:53 CST; 19s ago
Docs: man:mariadbd(8)
https://mariadb.com/kb/en/library/systemd/
Process: 125363 ExecStartPre=/usr/libexec/mariadb-check-socket (code=exited, status=0/SUCCESS)
Process: 125415 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir mariadb.service (code=exited, status=0/SUCCESS)
Process: 125541 ExecStartPost=/usr/libexec/mariadb-check-upgrade (code=exited, status=0/SUCCESS)
Main PID: 125512 (mariadbd)
Status: "Taking your SQL requests now..."
Tasks: 28 (limit: 22743)
Memory: 81.9M
CPU: 347ms
CGroup: /system.slice/mariadb.service
└─125512 /usr/libexec/mariadbd --basedir=/usr
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: The second is mysql@localhost, it has no password either, but
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: you need to be the system 'mysql' user to connect.
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: After connecting you can set the password, if you would need to be
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: able to connect as any of these users with a password and without sudo
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: See the MariaDB Knowledgebase at https://mariadb.com/kb
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: Please report any problems at https://mariadb.org/jira
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: The latest information about MariaDB is available at https://mariadb.org/.
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: Consider joining MariaDB's strong and vibrant community:
10月 15 17:13:53 controller mariadb-prepare-db-dir[125457]: https://mariadb.org/get-involved/
10月 15 17:13:53 controller systemd[1]: Started MariaDB 10.5 database server.
Initialize the database
mysql_secure_installation
The prompts and suggested answers are:
Enter current password for root (enter for none): just press Enter
Switch to unix_socket authentication [Y/n] y
Change the root password? [Y/n] y
New password: 123456
Re-enter new password: 123456
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
Here 123456 is the database root password being set.
The full session looks like this:
[root@controller ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.
You already have your root account protected, so you can safely answer 'n'.
Switch to unix_socket authentication [Y/n] y
Enabled successfully!
Reloading privilege tables..
... Success!
You already have your root account protected, so you can safely answer 'n'.
Change the root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
This output means the database was initialized successfully.
Test the database
Connect to it:
mysql -hlocalhost -uroot -p123456
(123456 is the database password you set.)
Verify:
show databases;
use mysql;
show tables;
The session looks like this:
[root@controller ~]# mysql -hlocalhost -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 13
Server version: 10.5.22-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.000 sec)
MariaDB [(none)]> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [mysql]> show tables;
+---------------------------+
| Tables_in_mysql |
+---------------------------+
| column_stats |
| columns_priv |
| db |
| event |
| func |
| general_log |
| global_priv |
| gtid_slave_pos |
| help_category |
| help_keyword |
| help_relation |
| help_topic |
| index_stats |
| innodb_index_stats |
| innodb_table_stats |
| plugin |
| proc |
| procs_priv |
| proxies_priv |
| roles_mapping |
| servers |
| slow_log |
| table_stats |
| tables_priv |
| time_zone |
| time_zone_leap_second |
| time_zone_name |
| time_zone_transition |
| time_zone_transition_type |
| transaction_registry |
| user |
+---------------------------+
31 rows in set (0.000 sec)
MariaDB [mysql]> exit;
Bye
2. The RabbitMQ message queue service
Find the package:
[root@controller ~]# yum list |grep rabbitmq-server
rabbitmq-server.x86_64 3.9.21-1.el9s centos-rabbitmq-38
Install the package
yum -y install rabbitmq-server
Start the service:
systemctl enable rabbitmq-server && systemctl start rabbitmq-server && systemctl status rabbitmq-server
Create a management user:
rabbitmqctl add_user openstack 123456
Here openstack is the username and 123456 is the password.
For example:
[root@controller ~]# rabbitmqctl add_user openstack 123456
Adding user "openstack" ...
Done. Don't forget to grant the user permissions to some virtual hosts! See 'rabbitmqctl help set_permissions' to learn more.
Grant the user full permissions (the three ".*" patterns are the configure, write, and read permission regexes):
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Check the permissions:
rabbitmqctl list_user_permissions openstack
Check that the service is listening (5672 is the AMQP port; 25672 is used for clustering and CLI tools):
netstat -tnlup | grep 5672
netstat -tnlup | grep 25672
3. Install the Memcached cache
Find the packages:
yum list |grep memcache
Install them (first line for CentOS Stream 9; second line for CentOS 7/Train):
yum -y install memcached.x86_64 python3-memcached.noarch
yum -y install memcached.x86_64 python-memcached.noarch
Edit the configuration:
vi /etc/sysconfig/memcached
Add the control node's host-only IP (or its mapped name) to the listen addresses in the OPTIONS line.
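After the edit, /etc/sysconfig/memcached would look roughly like this (only the OPTIONS line changes; the other values are the shipped defaults and may differ on your system):

```
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"
```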
Restart the service:
systemctl enable memcached && systemctl restart memcached && systemctl status memcached
Check the service:
netstat -tnulp|grep memcache
11211 is memcached's default port.
4. The etcd key-value store
Install the package:
yum -y install etcd.x86_64
Write the configuration file (replace 192.168.32.140 with your controller's IP):
echo 'ETCD_LISTEN_PEER_URLS="http://192.168.32.140:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.32.140:2379,http://127.0.0.1:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.32.140:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.32.140:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.32.140:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"' > /etc/etcd/etcd.conf
Restart the service:
systemctl enable etcd && systemctl start etcd && systemctl status etcd
Check that it is listening:
netstat -tnulp|grep etcd
IV. Installing the Keystone identity service
Keystone is the authentication unit of the OpenStack platform; it handles authentication for all of the other components.
1. Basic concepts
- Project: a collection of resources made available to users; resources in different projects are isolated from one another.
- Service: a service provided by one of the platform's components.
- Endpoint: an address used to reach or locate a particular service.
- User: any entity that holds credentials to use OpenStack; it can be an actual person, another system, or a service.
- Role: a predefined collection of permissions.
- Authentication and credentials: authentication is Keystone's process of verifying a user's identity; credentials are the data needed to prove that identity.
- Token: an encrypted string that serves as a "pass" for accessing resources.
- Group: a collection of users; assigning a role to a group grants it to every user in the group at once.
- Domain: a collection of projects and users.
2. Keystone component architecture
A user presents credentials to Keystone to log in. On success, Keystone issues the user a token, which accompanies the user's subsequent requests.
3. Install and configure Keystone
1) Install the packages
yum list |grep openstack-keystone
yum list |grep httpd
yum list |grep mod_wsgi
Install them (first line for CentOS Stream 9; second line for CentOS 7/Train):
yum install -y openstack-keystone.noarch httpd python3-mod_wsgi.x86_64
yum install -y openstack-keystone.noarch httpd mod_wsgi.x86_64
After installation, check the default service user and group it created:
cat /etc/passwd|grep keystone
cat /etc/group|grep keystone
2) Create the keystone database and grant privileges
Log in to the database:
mysql -uroot -p123456
(123456 is the database password created earlier.)
Create the keystone database:
create database keystone;
List the databases:
show databases;
3) Grant privileges on the keystone database
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
The two statements grant all privileges (ALL PRIVILEGES) on every table of the keystone database (keystone.*) to a database user named keystone, connecting either locally ('localhost') or from any remote host ('%'), authenticated with the password 123456. Note that this keystone database user is created by the GRANT ... IDENTIFIED BY statements themselves; it is separate from the keystone system user created when the package was installed.
4) Edit the Keystone configuration
vi /etc/keystone/keystone.conf
Add the following under the [database] section:
connection = mysql+pymysql://keystone:123456@controller/keystone
(controller is the control node's name.)
Add the following under the [token] section:
provider = fernet
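Taken together, the two additions sit in their sections of /etc/keystone/keystone.conf like this:

```
[database]
connection = mysql+pymysql://keystone:123456@controller/keystone

[token]
provider = fernet
```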
5) Initialize the database
su keystone -s /bin/sh -c "keystone-manage db_sync"
(su -s /bin/sh runs the sync as the keystone system user, whose login shell is normally disabled.)
Check the result in the database:
mysql -uroot -p123456
use keystone;
show tables;
4. Initialize the Keystone component
Initialize the Fernet key repository:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
Initialize the credential-encryption keys:
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the user identity information:
keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:5000/v3 --bootstrap-internal-url http://controller:5000/v3 --bootstrap-public-url http://controller:5000/v3 --bootstrap-region-id RegionOne
Here 123456 becomes the password of the Keystone admin user (it is independent of the database password), and controller is the control node's name.
5. Configure the web service
Link the Keystone WSGI configuration into Apache:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Edit the httpd configuration file:
vi /etc/httpd/conf/httpd.conf
Add the following directive:
ServerName controller
Enable and start the httpd service:
systemctl enable httpd && systemctl start httpd && systemctl status httpd
6. Test
Set up the environment variables
vi admin_login
Write in the following configuration:
export OS_USERNAME=admin
export OS_PASSWORD=123456 # change this to your admin password
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
If any of your values differ (names, password), adjust them accordingly.
Before running client commands in a session, load the variables:
source admin_login
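The file can also be created non-interactively with a heredoc, and the check at the end confirms the variables actually load (the values mirror the listing above; 123456 is the placeholder password):

```shell
# write the credentials file
cat > admin_login <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
# load it and confirm the variables are set
. ./admin_login
[ -n "$OS_AUTH_URL" ] && echo "credentials loaded"
```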
Create a project named project:
openstack project create --domain default project
List the projects:
openstack project list
Create a role named user:
openstack role create user
List the roles:
openstack role list
List the users:
openstack user list
List the domains:
openstack domain list
V. Installing the Glance image service
1. Basic concepts
Glance is the code name of the Image Service project, one of OpenStack's core components. Like Keystone, it is a WSGI-based web service; users can manage images through the web API or the command-line client. Its functions include registering, retrieving, and deleting virtual machine images and snapshots, and managing their permissions.
- Image metadata: information about an image stored in the database, such as its file name, size, and status, used for fast lookups.
- Image file: the image itself, kept in a backend store, i.e. a third-party storage system such as a local filesystem, Swift, S3, or Cinder.
- Disk format: in Glance, the disk format is the storage format of the image file.
qcow2 is a QEMU disk format that grows on demand and supports snapshots; it is the most commonly used format in OpenStack.
2. Glance component architecture
Every valid Glance request enters through the glance-api service. Requests that touch image metadata are handled by glance-api talking to the database, while all operations on image files go through the store interface, which is responsible for interacting with the backend storage.
3. Install Glance
Check that the package is available:
yum list |grep openstack-glance
Install Glance:
yum -y install openstack-glance.noarch
If the installation fails, try installing python3-pyxattr first:
yum --enablerepo=crb install python3-pyxattr
then rerun the Glance installation.
Check that the glance user and group were created:
cat /etc/passwd|grep glance
cat /etc/group|grep glance
Create the glance database and grant privileges:
mysql -uroot -p123456
create database glance;
show databases;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
The session looks like this:
[root@controller ~]# mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.5.22-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database glance;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| glance |
| information_schema |
| keystone |
| mysql |
| performance_schema |
+--------------------+
5 rows in set (0.004 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> exit;
Bye
4. Edit the configuration
Back up the configuration file:
cp /etc/glance/glance-api.conf /etc/glance/glance-api.bak
Strip the blank lines and comment lines from the working copy:
grep -Ev '^$|#' /etc/glance/glance-api.bak > /etc/glance/glance-api.conf
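The grep -Ev '^$|#' filter inverts a match for lines that are either empty or contain a #, so only active settings survive. A quick demonstration on a throwaway file:

```shell
# sample config with a comment and a blank line
cat > demo.conf <<'EOF'
# a comment line

[DEFAULT]
key = value
EOF
grep -Ev '^$|#' demo.conf
# prints:
# [DEFAULT]
# key = value
```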
Then add the following settings to the matching sections:
[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = glance
password = 123456
project_name = project
user_domain_name = Default
project_domain_name = Default
[paste_deploy]
flavor = keystone
[glance_store]
stores = file
# the local filesystem is the default backend store;
# image files are stored under filesystem_store_datadir
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
After editing, the file looks like this:
[DEFAULT]
[barbican]
[barbican_service_user]
[cinder]
[cors]
[database]
connection = mysql+pymysql://glance:123456@controller/glance
[file]
[glance.store.http.store]
[glance.store.rbd.store]
[glance.store.s3.store]
[glance.store.swift.store]
[glance.store.vmware_datastore.store]
[glance_store]
stores = file
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[healthcheck]
[image_format]
[key_manager]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = glance
password = 123456
project_name = project
user_domain_name = Default
project_domain_name = Default
[os_brick]
[oslo_concurrency]
[oslo_limit]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
[vault]
[wsgi]
5. Sync the database
su glance -s /bin/sh -c "glance-manage db_sync"
The sync succeeded when the output ends with "Database is synced successfully.":
[root@controller ~]# su glance -s /bin/sh -c "glance-manage db_sync"
2024-10-30 10:24:19.527 69658 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2024-10-30 10:24:19.528 69658 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2024-10-30 10:24:19.603 69658 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2024-10-30 10:24:19.603 69658 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial
INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01
INFO [alembic.runtime.migration] Running upgrade queens_expand01 -> rocky_expand01, add os_hidden column to images table
INFO [alembic.runtime.migration] Running upgrade rocky_expand01 -> rocky_expand02, add os_hash_algo and os_hash_value columns to images table
INFO [alembic.runtime.migration] Running upgrade rocky_expand02 -> train_expand01, empty expand for symmetry with train_contract01
INFO [alembic.runtime.migration] Running upgrade train_expand01 -> ussuri_expand01, empty expand for symmetry with ussuri_expand01
INFO [alembic.runtime.migration] Running upgrade ussuri_expand01 -> wallaby_expand01, add image_id, request_id, user columns to tasks table"
INFO [alembic.runtime.migration] Running upgrade wallaby_expand01 -> xena_expand01, empty expand for symmetry with 2023_1_expand01
INFO [alembic.runtime.migration] Running upgrade xena_expand01 -> yoga_expand01, empty expand for symmetry with 2023_1_expand01
INFO [alembic.runtime.migration] Running upgrade yoga_expand01 -> zed_expand01, empty expand for symmetry with 2023_1_expand01
INFO [alembic.runtime.migration] Running upgrade zed_expand01 -> 2023_1_expand01, empty expand for symmetry with 2023_1_expand01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: 2023_1_expand01, current revision(s): 2023_1_expand01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database migration is up to date. No migration needed.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01
INFO [alembic.runtime.migration] Running upgrade queens_contract01 -> rocky_contract01
INFO [alembic.runtime.migration] Running upgrade rocky_contract01 -> rocky_contract02
INFO [alembic.runtime.migration] Running upgrade rocky_contract02 -> train_contract01
INFO [alembic.runtime.migration] Running upgrade train_contract01 -> ussuri_contract01
INFO [alembic.runtime.migration] Running upgrade ussuri_contract01 -> wallaby_contract01
INFO [alembic.runtime.migration] Running upgrade wallaby_contract01 -> xena_contract01
INFO [alembic.runtime.migration] Running upgrade xena_contract01 -> yoga_contract01
INFO [alembic.runtime.migration] Running upgrade yoga_contract01 -> zed_contract01
INFO [alembic.runtime.migration] Running upgrade zed_contract01 -> 2023_1_contract01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: 2023_1_contract01, current revision(s): 2023_1_contract01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.
Log in to the glance database and verify that the tables were created:
mysql -uroot -p123456
use glance;
show tables;
exit;
6. Initialize the Glance component
Create the glance user
openstack user create --domain default --password 123456 glance
Assign the admin role to glance
openstack role add --project project --user glance admin
Create the image service and its three endpoints
openstack service create --name glance image
openstack endpoint create --region RegionOne glance public http://controller:9292
openstack endpoint create --region RegionOne glance internal http://controller:9292
openstack endpoint create --region RegionOne glance admin http://controller:9292
Enable and start the service
systemctl enable openstack-glance-api && systemctl start openstack-glance-api && systemctl status openstack-glance-api
Upload the image file to the /etc directory with a file-transfer tool, then verify it is there
ll /etc/cirros-0.5.1-x86_64-disk.img
Create the image
openstack image create --file /etc/cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
openstack image list
VI. Installing the Placement Service
1. Basic concepts
Placement's main component is its API module (placement-api), which tracks the platform's resource inventory and usage.
This task installs Placement on the controller node.
2. Basic workflow
- Step 1: Nova tells Placement what resources, and how many of each, the requested instance needs.
- Step 2: Placement queries the database for two pieces of data: the physical hosts whose free resources can accommodate the instance (with their remaining amounts), and those hosts' original resource inventories.
- Step 3: The database returns the queried data to Placement.
- Step 4: Placement passes both pieces of data to Nova.
- Step 5: Nova uses the data to select, via its scheduling algorithm, the physical host that will run the instance, and reports the choice back to Placement.
- Step 6: Placement updates the database, deducting the claimed resources from that host's inventory.
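The deduction in step 6 is plain bookkeeping: pick a host with enough free capacity, then subtract the claim. A toy sketch of that idea in shell (the host names and capacities here are invented, and real Placement tracks many resource classes, not just vCPUs):

```shell
# Toy model of Placement's pick-then-deduct bookkeeping (hypothetical hosts/values).
declare -A free_vcpus=( [compute1]=8 [compute2]=4 )

# Steps 2/5: pick a host with enough free vCPUs for the request.
pick_host() {
  local need=$1 host
  for host in "${!free_vcpus[@]}"; do
    if (( free_vcpus[$host] >= need )); then
      echo "$host"
      return 0
    fi
  done
  return 1
}

# Step 6: deduct the claimed resources from the chosen host.
claim() {
  local host=$1 need=$2
  free_vcpus[$host]=$(( free_vcpus[$host] - need ))
}

host=$(pick_host 6)   # only compute1 has >= 6 free
claim "$host" 6
echo "$host now has ${free_vcpus[$host]} free vCPUs"
```
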
3. Installation
yum install -y openstack-placement-api.noarch
Verify that the placement user and group were created
cat /etc/passwd |grep placement
cat /etc/group |grep placement
4. Create and authorize the database
mysql -uroot -p123456
create database placement;
GRANT ALL PRIVILEGES ON placement.* TO placement@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON placement.* TO placement@'%' IDENTIFIED BY '123456';
exit;
The session looks like this:
[root@controller ~]# mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.5.22-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database placement;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO placement@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO placement@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)
5. Edit the configuration file
Back up the original
cp /etc/placement/placement.conf /etc/placement/placement.bak
Strip the comments and blank lines
grep -Ev '^$|#' /etc/placement/placement.bak > /etc/placement/placement.conf
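The `grep -Ev '^$|#'` trick used throughout this guide drops empty lines and any line containing a `#`. A self-contained demonstration on a made-up sample file:

```shell
# Build a small sample config, then strip blank lines and lines containing '#'.
cat > /tmp/sample.conf <<'EOF'
# a comment

[DEFAULT]
key = value
EOF

grep -Ev '^$|#' /tmp/sample.conf
# only the [DEFAULT] and key = value lines remain
```

Note that the pattern removes any line containing `#`, including inline comments, so check the result before overwriting the real file.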
Edit the configuration file
vi /etc/placement/placement.conf
[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 123456
[oslo_middleware]
[oslo_policy]
[placement]
[placement_database]
connection = mysql+pymysql://placement:123456@controller/placement
[profiler]
Add the Apache configuration for the Placement API
vi /etc/httpd/conf.d/00-placement-api.conf
Append the following:
<Directory /usr/bin>
Require all granted
</Directory>
The resulting file looks like this:
Listen 8778
<VirtualHost *:8778>
WSGIProcessGroup placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
WSGIScriptAlias / /usr/bin/placement-api
<IfVersion >= 2.4>
ErrorLogFormat "%M"
</IfVersion>
<Directory /usr/bin>
Require all granted
</Directory>
ErrorLog /var/log/placement/placement-api.log
#SSLEngine On
#SSLCertificateFile ...
#SSLCertificateKeyFile ...
</VirtualHost>
Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>
Restart the httpd service
systemctl restart httpd
Verify
httpd -v
6. Synchronize the database
su placement -s /bin/sh -c "placement-manage db sync"
If this reports a permissions error, loosening the config file's permissions may help
chmod 777 /etc/placement/placement.conf
Verify the sync
mysql -uroot -p123456
use placement;
show tables;
exit;
7. Initialize the component
Load the admin environment variables (run from the directory containing the file)
. admin_login
Create the placement user
openstack user create --domain default --password 123456 placement
Create the placement service
openstack service create --name placement placement
Create the three endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
8. Verify the service
curl http://controller:8778
Check the listening port
netstat -tnulp |grep 8778
VII. Installing the Compute Service (Nova)
1. Basic concepts
Nova manages the lifecycle of instances (virtual machines) in OpenStack: creation, deletion, starting, stopping, and so on. Nova sits at the center of the OpenStack architecture, and the other services and components (Glance, Placement, Cinder, Neutron, etc.) support it.
2. Component architecture
- nova-api: receives and responds to external requests; the only external entry point for managing Nova
- nova-scheduler: selects a host from the compute cluster on which to create an instance
- nova-compute: Nova's core module, responsible for actually creating instances
- nova-conductor: writes data to the database on behalf of the other modules
3. Basic workflow
- Step 1: nova-api receives an instance-creation request from the web UI or command line and places it on the message queue.
- Step 2: nova-conductor takes the request from the queue, fetches related data (such as Cell information) from the database, and puts the request and data back on the queue.
- Step 3: nova-scheduler takes the request and data, works with the Placement component to choose the physical host for the instance, then hands the request back to the queue for nova-compute.
- Step 4: nova-compute takes the request and talks to Glance, Neutron, and Cinder to obtain image, network, and storage resources. Once everything is ready, nova-compute calls the specific virtualization program through the hypervisor layer (KVM, QEMU, Xen, etc.) to create the virtual machine.
4. Install Nova on the controller node
4.1 Install the packages
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
The following four Nova packages are installed on the controller node:
- "openstack-nova-api": Nova's external interface module.
- "openstack-nova-conductor": Nova's conductor service, which provides database access.
- "openstack-nova-scheduler": Nova's scheduler service, which selects the host for instance creation.
- "openstack-nova-novncproxy": Nova's Virtual Network Console (VNC) proxy module, which lets users reach instances over VNC.
Check that the packages installed successfully
rpm -q openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
Verify the nova user and group
cat /etc/passwd|grep nova
cat /etc/group|grep nova
4.2 Create the databases
Create them:
mysql -uroot -p123456
create database nova;
create database nova_api;
create database nova_cell0;
Grant access
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
exit;
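The create-and-grant pattern above is identical for every service database. A helper that generates the SQL is one way to avoid typos; this is a sketch that only prints SQL (you would pipe it into `mysql`), and it adds `IF NOT EXISTS`, which the interactive sessions above do not use:

```shell
# Emit the CREATE/GRANT statements for one service database.
# Args: database name, database user, password.
gen_db_sql() {
  local db=$1 user=$2 pass=$3 host
  echo "CREATE DATABASE IF NOT EXISTS ${db};"
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'${host}' IDENTIFIED BY '${pass}';"
  done
}

# All three Nova databases belong to the single 'nova' user:
for db in nova nova_api nova_cell0; do gen_db_sql "$db" nova 123456; done
# To apply:  ... | mysql -uroot -p123456
```
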
4.3 Edit the configuration file
Back up the original
cp /etc/nova/nova.conf /etc/nova/nova.bak
Strip the comments and blank lines
grep -Ev '^$|#' /etc/nova/nova.bak >/etc/nova/nova.conf
Edit the configuration file
vi /etc/nova/nova.conf
Change the following settings
[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[database]
connection = mysql+pymysql://nova:123456@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 123456
region_name = RegionOne
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672
my_ip = 192.168.32.140
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
The my_ip parameter under [DEFAULT] is the controller node's host-only IP. My final file looks like this:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672
my_ip = 192.168.32.140
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[barbican]
[barbican_service_user]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[cyborg]
[database]
connection = mysql+pymysql://nova:123456@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_cache]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[os_vif_linux_bridge]
[os_vif_ovs]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[pci]
[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 123456
region_name = RegionOne
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[zvm]
4.4 Synchronize the databases
su nova -s /bin/sh -c "nova-manage api_db sync"
su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"
su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"
su nova -s /bin/sh -c "nova-manage db sync"
If errors occur, you can log in to the database and drop the conflicting indexes:
mysql -u root -p123456
use nova;
ALTER TABLE block_device_mapping DROP INDEX block_device_mapping_instance_uuid_virtual_name_device_name_idx;
ALTER TABLE instances DROP INDEX uniq_instances0uuid;
exit;
Verify
nova-manage cell_v2 list_cells
4.5 Initialize the Nova component
Load the admin credentials, then create the nova user
source admin_login
openstack user create --domain default --password 123456 nova
Assign the admin role
openstack role add --project project --user nova admin
Create the compute service
openstack service create --name nova compute
Create the endpoints
openstack endpoint create --region RegionOne nova public http://controller:8774/v2.1
openstack endpoint create --region RegionOne nova internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne nova admin http://controller:8774/v2.1
Check
openstack endpoint list |grep nova
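Each service needs the same three endpoints, differing only in the interface name. A small generator can print the commands for any service; this is a sketch that only prints them and does not contact Keystone:

```shell
# Print the three endpoint-create commands for a service and URL.
gen_endpoints() {
  local service=$1 url=$2 iface
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne ${service} ${iface} ${url}"
  done
}

gen_endpoints nova http://controller:8774/v2.1
# To actually run them on the controller:
#   gen_endpoints nova http://controller:8774/v2.1 | bash
```
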
4.6 Start the components
Enable them at boot
systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
Start them
systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
Check their status
systemctl status openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
4.7 Check the service ports
Port check
netstat -nutpl|grep 877
Service check
openstack compute service list
5. Install Nova on the compute node
Only Nova's compute module, "nova-compute", needs to be installed on the compute node. Install it as follows.
5.1 Install the package
yum -y install openstack-nova-compute
Verify the nova user and group
cat /etc/passwd |grep nova
cat /etc/group |grep nova
5.2 Edit the configuration
Back up the original
cp /etc/nova/nova.conf /etc/nova/nova.bak
Strip the comments and blank lines
grep -Ev '^$|#' /etc/nova/nova.bak >/etc/nova/nova.conf
Edit the configuration file
vi /etc/nova/nova.conf
Add the following settings:
[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 123456
region_name = RegionOne
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 123456
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672
my_ip = 192.168.32.150 # host-only IP of the compute node
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.130.140:6080/vnc_auto.html # IP of the controller node
[libvirt]
virt_type = qemu
My final file looks like this:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672
my_ip = 192.168.32.150
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:123456@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 123456
[libvirt]
virt_type = qemu
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 123456
region_name = RegionOne
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 60
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.130.140:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
Enable and start the compute node services
systemctl enable libvirtd openstack-nova-compute && systemctl start libvirtd openstack-nova-compute && systemctl status libvirtd openstack-nova-compute
5.3 Discover the compute node
On the controller node, create the credentials file
vi admin_login
and write the following content:
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
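A common failure mode is running `openstack` commands before these variables are loaded. A small guard function can check for them first (a sketch; the variable list matches the file above):

```shell
# Fail fast if the OpenStack credential variables are not loaded.
check_creds() {
  local v missing=0
  for v in OS_USERNAME OS_PASSWORD OS_PROJECT_NAME OS_AUTH_URL; do
    if [ -z "${!v}" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return $missing
}

# Usage: . admin_login && check_creds && openstack token issue
```
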
Then run host discovery on the controller node:
su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose"
Set up periodic discovery in the configuration file ([scheduler] discover_hosts_in_cells_interval = 60, as configured above)
vi /etc/nova/nova.conf
Restart the service
systemctl restart openstack-nova-api && systemctl status openstack-nova-api
5.4 Check component status
openstack compute service list
openstack catalog list
nova-status upgrade check
VIII. Installing the Networking Service (Neutron)
1. Basic concepts
- Bridge: a bridge behaves like a switch, connecting different network devices. Neutron distinguishes the internal bridge (bridge-internal, br-int), which implements internal networking, from the external bridge (bridge-external, br-ex), which handles communication with external networks.
- Network: an isolated layer-2 segment, similar to a virtual LAN.
- Subnet: a subnet must be attached to a network; it is assigned an IP range, from which instances get their addresses.
- Port: a port can be seen as a port on a virtual switch, roughly the network interface of an instance.
2. Neutron's component architecture
- neutron-server: Neutron's service module; exposes the OpenStack networking API, receives requests, and dispatches them to plugins.
- neutron-plugin: talks to the database, persisting and updating the current network state: instance networks, subnets, ports, and so on.
- neutron-agent: actually calls the provider to create networks, subnets, and other network resources.
- Core plugin (ML2): covers the three core resource types: networks, subnets, and ports.
- Service plugins: the collective name for all plugins other than the core plugin.
3. Neutron's basic workflow
4. Network modes supported by Neutron
- Flat: instance NICs get IP addresses in the same subnet as the physical hosts' NICs.
- VLAN: instances are logically grouped into VLANs; only instances in the same VLAN can reach each other. Typically used in private clouds.
- VXLAN and GRE: support far more segments, over 16 million, which makes them better suited to public clouds.
5. Enable promiscuous mode on the NIC
Perform this on both nodes.
Find the NIC name
ip a
Enable promiscuous mode (replace ens32 with your NAT NIC's name)
ifconfig ens32 promisc
Make it persistent across reboots
echo "ifconfig ens32 promisc" >> /etc/profile
6. Load the bridge netfilter module
Perform the following on both nodes.
Add the configuration
echo "net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
Load the module
modprobe br_netfilter
Apply and verify
sysctl -p
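Whether both bridge keys actually landed in `/etc/sysctl.conf` can be checked with a small function; here it is demonstrated against a throwaway copy rather than the real file:

```shell
# Verify both bridge-netfilter keys are present and set to 1 in a sysctl file.
check_bridge_keys() {
  local f=$1 key
  for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
    grep -q "^${key} = 1" "$f" || { echo "missing: $key"; return 1; }
  done
  echo "ok"
}

# Demonstrate on a temp file shaped like what the echo command above appends:
printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\n' > /tmp/sysctl.test
check_bridge_keys /tmp/sysctl.test
```

On the real systems you would run `check_bridge_keys /etc/sysctl.conf` after the echo step.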
7. Install the packages on the controller node
7.1 Install on the controller node
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge
- "openstack-neutron": the package for the neutron-server module.
- "openstack-neutron-ml2": the package for the ML2 plugin.
- "openstack-neutron-linuxbridge": the Linux bridge and network-provider packages.
Verify the neutron user and group
cat /etc/passwd |grep neutron
cat /etc/group |grep neutron
7.2 Database setup
Create the database
mysql -uroot -p123456
create database neutron;
Grant access
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
7.3 Edit the configuration files
Back up the original
cp /etc/neutron/neutron.conf /etc/neutron/neutron.bak
Strip the comments and blank lines
grep -Ev '^$|#' /etc/neutron/neutron.bak>/etc/neutron/neutron.conf
Edit the Neutron configuration
vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = nova
password = 123456
region_name = RegionOne
server_proxyclient_address = 192.168.32.140 # host-only IP of the controller node
My final file looks like this:
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[cors]
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = nova
password = 123456
region_name = RegionOne
server_proxyclient_address = 192.168.32.140
Back up the ML2 plugin configuration
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.bak
Write the new configuration
echo "[DEFAULT]
[ml2]
type_drivers = flat
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true" > /etc/neutron/plugins/ml2/ml2_conf.ini
Create a symlink so Neutron can find the plugin configuration
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Back up the Linux bridge agent configuration
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.bak
Write the new configuration (ens33 is the external NAT NIC's name)
echo "[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver" > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Back up the DHCP agent configuration
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.bak
Write the new configuration
echo "[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true" > /etc/neutron/dhcp_agent.ini
Back up the metadata agent configuration
cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.bak
Write the new configuration (controller is the controller node's hostname)
echo "[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
[cache]" > /etc/neutron/metadata_agent.ini
vi /etc/nova/nova.conf
Find the [neutron] section and add the following settings
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = project
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
7.4 Synchronize the database
su neutron -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade heads"
Verify
mysql -uneutron -p123456
use neutron;
show tables;
7.5 Initialize the Neutron component
Create the neutron user
openstack user create --domain default --password 123456 neutron
Assign the admin role
openstack role add --project project --user neutron admin
Create the network service
openstack service create --name neutron network
Create the public, internal, and admin endpoints
openstack endpoint create --region RegionOne neutron public http://controller:9696
openstack endpoint create --region RegionOne neutron internal http://controller:9696
openstack endpoint create --region RegionOne neutron admin http://controller:9696
Restart and verify
systemctl restart openstack-nova-api
systemctl status openstack-nova-api
systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
systemctl status neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
Check the port
netstat -tnlup |grep 9696
curl http://controller:9696
8. Install Neutron on the compute node
8.1 Install the package
yum -y install openstack-neutron-linuxbridge.noarch
Verify the neutron user and group
cat /etc/passwd |grep neutron
cat /etc/group |grep neutron
8.2 Back up and edit the configuration files
Back up the original
cp /etc/neutron/neutron.conf /etc/neutron/neutron.bak
Strip the comments and blank lines
grep -Ev '^$|#' /etc/neutron/neutron.bak>/etc/neutron/neutron.conf
Edit the Neutron configuration
vi /etc/neutron/neutron.conf
Add the following settings
[DEFAULT]
transport_url = rabbit://openstack:123456@controller:5672 # 123456 is the openstack RabbitMQ user's password
auth_strategy = keystone
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = neutron
password = 123456
My final file looks like this:
[DEFAULT]
transport_url = rabbit://openstack:123456@controller:5672
auth_strategy = keystone
[cors]
[database]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
Back up the Linux bridge agent configuration
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.bak
Write the new configuration (ens32 is this machine's external NAT NIC)
echo "[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:ens32
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver" > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
vi /etc/nova/nova.conf
Add the following settings
[DEFAULT]
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = project
username = neutron
password = 123456
My final file looks like this:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672
my_ip = 192.168.130.150
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:123456@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 123456
[libvirt]
virt_type = qemu
[metrics]
[mks]
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = project
username = neutron
password = 123456
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 123456
region_name = RegionOne
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 60
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.130.140:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
8.3 Start the services
Restart the nova-compute service
systemctl restart openstack-nova-compute
systemctl status openstack-nova-compute
Enable and start the neutron-linuxbridge-agent service
systemctl enable neutron-linuxbridge-agent
systemctl start neutron-linuxbridge-agent
systemctl status neutron-linuxbridge-agent
9. Verify the Neutron services
openstack network agent list
neutron-status upgrade check
IX. Installing the Dashboard
Dashboard provides a web-based front end for managing OpenStack.
This task installs Dashboard on the compute node.
Its main purpose is to let users configure and manage the cloud platform through a web page.
1. Basic workflow
2. Install the package
yum -y install openstack-dashboard
3. Edit the configuration
vi /etc/openstack-dashboard/local_settings
Change the following settings
ALLOWED_HOSTS = ['*']
OPENSTACK_HOST = "controller"
TIME_ZONE = "Asia/Shanghai"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
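Edits like these can also be scripted. A sed sketch for the `OPENSTACK_HOST` line, demonstrated on a throwaway copy rather than the real /etc/openstack-dashboard/local_settings (the sample file contents are invented):

```shell
# Make a small stand-in for local_settings, then rewrite OPENSTACK_HOST in place.
printf 'OPENSTACK_HOST = "127.0.0.1"\nTIME_ZONE = "UTC"\n' > /tmp/local_settings.test
sed -i 's/^OPENSTACK_HOST = .*/OPENSTACK_HOST = "controller"/' /tmp/local_settings.test
grep OPENSTACK_HOST /tmp/local_settings.test
```

The same pattern works for TIME_ZONE and ALLOWED_HOSTS; always keep a backup before running sed -i on the real file.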
Add the following new settings
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Modify the OPENSTACK_NEUTRON_NETWORK block as follows
OPENSTACK_NEUTRON_NETWORK = {
'enable_auto_allocated_network': False,
'enable_distributed_router': False,
'enable_fip_topology_check': False,
'enable_ha_router': False,
'enable_ipv6': False,
# TODO(amotoki): Drop OPENSTACK_NEUTRON_NETWORK completely from here.
# enable_quotas has the different default value here.
'enable_quotas': False,
'enable_rbac_policy': False,
'enable_router': False,
'default_dns_nameservers': [],
'supported_provider_types': ['*'],
'segmentation_id_range': {},
'extra_provider_types': {},
'supported_vnic_types': ['*'],
'physical_networks': [],
}
Alternatively, download a preconfigured file:
curl https://cdn.b52m.cn/static/openstack-dashboard_local_settings.php?hostname=controller > /etc/openstack-dashboard/local_settings
4. Publish
cd /usr/share/openstack-dashboard
python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
Create a symlink to the configuration directory
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
Enable and restart the httpd service
systemctl enable httpd
systemctl start httpd
systemctl status httpd
5. Test
In a browser on your local machine, open the compute node's IP address: http://192.168.130.150
Enter "Default" in the Domain field,
"admin" in the User Name field,
and "123456" in the Password field (the password set in Keystone),
then click Sign In to enter the dashboard.
X. The Block Storage Service (Cinder)
1. Basic concepts
Cinder is the OpenStack component that provides block storage; its main job is managing virtual disks for instances.
- File storage: relies on a file system; files are stored in the file system and accessed through services such as FTP or the Network File System (NFS).
- Block storage: the "block" is, as the name suggests, a whole storage device; block storage can present entire virtual disks to instances.
- Object storage: object data has two parts: the data itself, stored on object storage servers, and the corresponding metadata, stored on metadata servers.
2. Cinder's component architecture
- cinder-api: the entry point; receives requests and their parameters.
- cinder-volume: manages all volume information, including create, delete, modify, and query operations.
- volume provider: the component that actually creates volumes.
- cinder-scheduler: chooses which storage server a volume is created on.
- cinder-backup: volume backups.
3. Cinder's basic workflow
- The modules cooperate roughly as follows (all inter-module communication goes through the message queue).
- Step 1: cinder-api receives a volume-creation request from the web UI or command line and, after the necessary processing, places it on the message queue.
- Step 2: cinder-scheduler takes the request and data from the queue and picks, from the available storage nodes, one that can hold the volume, then puts that information back on the queue.
- Step 3: cinder-volume takes the request from the queue and, through the volume provider, calls the specific volume-management system to create the volume on the storage device.
4. Install on the controller node
4.1 Install the package
yum -y install openstack-cinder
Verify the cinder user and group
cat /etc/passwd |grep cinder
cat /etc/group |grep cinder
4.2 Create and authorize the database
mysql -uroot -p123456
create database cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
4.3 Edit the configuration
Back up the original
cp /etc/cinder/cinder.conf /etc/cinder/cinder.bak
Strip the comments and blank lines
grep -Ev '^$|#' /etc/cinder/cinder.bak > /etc/cinder/cinder.conf
Edit the configuration
vi /etc/cinder/cinder.conf
Change the following settings
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller:5672
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = cinder
password = 123456
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
My final file looks like this:
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller:5672
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = cinder
password = 123456
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
vi /etc/nova/nova.conf
Add the following configuration
[cinder]
os_region_name = RegionOne
4.4 Synchronize the database
su cinder -s /bin/sh -c "cinder-manage db sync"
mysql -uroot -p123456
use cinder;
show tables;
4.5 Create the user and role
Create the cinder user
openstack user create --domain default --password 123456 cinder
Assign the admin role
openstack role add --project project --user cinder admin
Create the service
openstack service create --name cinderv3 volumev3
Create the service endpoints
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
4.6 Restart the services
systemctl enable openstack-cinder-api openstack-cinder-scheduler
systemctl start openstack-cinder-api openstack-cinder-scheduler
systemctl status openstack-cinder-api openstack-cinder-scheduler
5. Install on the compute node
5.1 Add a hard disk to the compute node
After the disk has been added:
5.2 Set up the storage node: create the volume group
Check the system's disks and mounts
lsblk
Create the LVM physical volume (sdb is the newly added disk)
pvcreate /dev/sdb
Combine the physical volume into a volume group
vgcreate cinder-volumes /dev/sdb
5.3 Edit the LVM configuration
vi /etc/lvm/lvm.conf
Add the following line in the devices section:
filter = [ "a/sdb/","r/.*/"]
Here "a" means accept and "r" means reject
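The filter accepts ("a") only the Cinder data disk and rejects ("r") everything else. A tiny helper can build the line for any device name (a sketch; sdb is this guide's example disk):

```shell
# Build the lvm.conf filter line for a given data disk.
lvm_filter() {
  printf 'filter = [ "a/%s/","r/.*/"]\n' "$1"
}

lvm_filter sdb
```

If the storage node later gains another Cinder disk (say sdc), the pattern extends to accepting both devices before the final reject rule.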
5.4重启lvm服务
systemctl enable lvm2-lvmetad
systemctl start lvm2-lvmetad
systemctl status lvm2-lvmetad
5.5安装cinder包
yum -y install openstack-cinder targetcli python-keystone
5.6修改cinder配置
备份
cp /etc/cinder/cinder.conf /etc/cinder/cinder.bak
去除注释
grep -Ev '^$|#' /etc/cinder/cinder.bak > /etc/cinder/cinder.conf
修改配置
vi /etc/cinder/cinder.conf
Add the following:
[DEFAULT]
auth_strategy = keystone
enabled_backends = lvm
transport_url = rabbit://openstack:123456@controller:5672
glance_api_servers = http://controller:9292
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = cinder
password = 123456
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
My final file looks like this:
[DEFAULT]
auth_strategy = keystone
enabled_backends = lvm
transport_url = rabbit://openstack:123456@controller:5672
glance_api_servers = http://controller:9292
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = cinder
password = 123456
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
5.7 Enable and start the Cinder services
systemctl enable openstack-cinder-volume target
systemctl start openstack-cinder-volume target
systemctl status openstack-cinder-volume target
6. Verification
openstack volume service list
Create an 8 GB volume:
openstack volume create --size 8 volume1
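If `openstack volume service list` shows any service with State `down`, the storage backend is misconfigured. The sketch below parses a hypothetical copy of that output (the sample rows and field positions are assumptions) and counts services that are not up:

```shell
# Hypothetical sample of 'openstack volume service list' rows:
# Binary Host Zone Status State
cat > /tmp/svc.txt <<'EOF'
cinder-scheduler controller nova enabled up
cinder-volume compute@lvm nova enabled up
EOF
# Count rows whose State column (field 5) is not "up"; 0 means healthy.
awk '$5 != "up"' /tmp/svc.txt | wc -l
```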
XI. Virtual Network Management (tutorial; optional)
1. Network management
This mainly covers creating, deleting, modifying, and querying networks.
openstack network <action> [options] [<network-name>]
1.1 Create a network
Create a shared external network:
openstack network create --share --external --provider-physical-network provider --provider-network-type flat 230303050-zhangsan
1.2 List networks
openstack network list
1.3 Show network details (use the ID obtained in the previous step)
openstack network show 9a8b69e6-d689-4cb1-b900-23c6cd971b5d
1.4 Modify a network
openstack network set --name 2022-lisi 9a8b69e6-d689-4cb1-b900-23c6cd971b5d
openstack network list
1.5 Delete a network
openstack network delete 9a8b69e6-d689-4cb1-b900-23c6cd971b5d
openstack network list
2. Subnet management
A subnet is an IP address range attached to a network; its main job is to assign IP addresses to new ports created on that network. Subnets have a many-to-one relationship with networks: every subnet belongs to exactly one network, and a network can contain multiple subnets.
2.1 Create a subnet
The parent network must already exist:
openstack subnet create --network 230303050-zhangsan --allocation-pool start=192.168.130.100,end=192.168.130.250 --dns-nameserver 114.114.114.114 --subnet-range 192.168.130.0/24 230303050-zhangsan-subnet
Where:
--network: the network name
--allocation-pool: the range of addresses the subnet may assign
--dns-nameserver: the DNS server
--subnet-range: the subnet in CIDR notation
230303050-zhangsan-subnet: the subnet name
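As a sanity check, the allocation pool must fall inside the subnet range. For a /24 this reduces to comparing the first three octets; a minimal sketch (a simplification: other prefix lengths need real CIDR arithmetic):

```shell
# For a /24 subnet, an address is in range iff it shares the first
# three octets with the subnet.
subnet_prefix=192.168.130
for ip in 192.168.130.100 192.168.130.250; do
  case "$ip" in
    "$subnet_prefix".*) echo "$ip in range" ;;
    *)                  echo "$ip OUT of range" ;;
  esac
done > /tmp/pool_check.txt
cat /tmp/pool_check.txt
```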
2.2 List subnets
openstack subnet list
2.3 Show subnet details
openstack subnet show 230303050-zhangsan-subnet
2.4 Modify a subnet
openstack subnet set --name 230303050-lisi-subnet 230303050-zhangsan-subnet
openstack subnet list
2.5 Delete a subnet
openstack subnet delete 230303050-lisi-subnet
openstack subnet list
3. Port management
A port is an interface on a subnet used to connect the virtual NIC of a cloud instance. A port defines a MAC (hardware) address and an independent IP address; when an instance's virtual NIC attaches to a port, the port assigns that MAC and IP address to the NIC.
3.1 Create a port
openstack port create zhangsan-port --network 230303050-zhangsan --fixed-ip subnet=230303050-zhangsan-subnet,ip-address=192.168.130.110
Where:
zhangsan-port: the port name
230303050-zhangsan: the network name
230303050-zhangsan-subnet: the subnet name
192.168.130.110: an IP within the subnet, which becomes the port's IP
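The fixed IP requested for a port should sit inside the subnet's allocation pool (192.168.130.100-250 in this guide). A small sketch that checks the last octet against those bounds:

```shell
# Check the last octet of the requested fixed IP against the
# allocation pool bounds used earlier (100-250).
ip=192.168.130.110
last=${ip##*.}          # strip everything up to the final dot
if [ "$last" -ge 100 ] && [ "$last" -le 250 ]; then
  echo "$ip is inside the pool"
else
  echo "$ip is outside the pool"
fi
```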
3.2 List ports
openstack port list
3.3 Show port details
openstack port show zhangsan-port
3.4 Modify a port
openstack port set --name lisi-port zhangsan-port
openstack port list
3.5 Delete a port
openstack port delete lisi-port
openstack port list
4. Bridge management
Install the bridge command-line utilities:
yum install -y bridge-utils
Add a bridge:
brctl addbr br1
Bind a NIC to the bridge:
brctl addif br1 ens33
Show bridge information:
brctl show br1
Unbind the NIC:
brctl delif br1 ens33
Delete the bridge:
brctl delbr br1
brctl show
5. Flavor management
A flavor is like a virtual hardware template for a cloud instance.
Flavor management typically includes creating, deleting, and querying flavors, using the command below.
openstack flavor <action> [options] <flavor-name>
5.1 Create a flavor
openstack flavor create --id auto --ram 512 --disk 10 --vcpus 1 --public my-flavor
5.2 List flavors
openstack flavor list
5.3 Show flavor details
openstack flavor show my-flavor
5.4 Modify a flavor
openstack flavor set --property time=2024 my-flavor
openstack flavor show my-flavor
5.5 Delete a flavor
openstack flavor delete my-flavor
openstack flavor list
6. Instance management
Instance management is a core function of the OpenStack platform. It typically includes creating, deleting, and querying instances, using the command below.
openstack server <action> <instance-name> [options]
6.1 Create an instance
openstack server create vm001 --image cirros --flavor my-flavor --network 230303050-zhangsan
Where:
vm001: the instance name
cirros: the image name
my-flavor: the flavor name
230303050-zhangsan: the network name
6.2 List instances
openstack server list
6.3 Show instance details
openstack server show vm001
6.4 Stop an instance
openstack server stop vm001