Deploying OpenStack HA
I. Technology Overview
Heartbeat and Corosync are popular messaging-layer (cluster infrastructure) components, and Pacemaker is the most widely used CRM (cluster resource manager). Corosync + Pacemaker is the most common high-availability cluster suite; this guide uses DRBD + Pacemaker + Corosync to deploy OpenStack HA.
II. Pre-installation Preparation
1. Basic initialization
Run the following on both nodes (on the second node, set the hostname to controller02 instead):
hostnamectl set-hostname controller01
yum -y install vim lrzsz net-tools
cat >>/etc/hosts<<EOF
192.168.180.190 controller01
192.168.180.180 controller02
192.168.180.200 controller
EOF
systemctl stop firewalld.service && systemctl disable firewalld.service
sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config && setenforce 0
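A few optional sanity checks after the basic setup (illustrative commands, not part of the original procedure):
ping -c 2 controller02 // confirm the /etc/hosts entries resolve between the nodes
firewall-cmd --state // should report "not running"
getenforce // should report Permissive now, and Disabled after a reboot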
2. Configure time synchronization
controller01:
yum install chrony -y
vim /etc/chrony.conf
server ntp6.aliyun.com iburst
allow 192.168.0.0/16
systemctl enable chronyd.service && systemctl restart chronyd.service
chronyc sources && chronyc -a makestep
controller02:
yum install chrony -y
vim /etc/chrony.conf
server controller01 iburst
systemctl enable chronyd.service && systemctl restart chronyd.service && chronyc sources
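To confirm that controller02 is actually synchronizing from controller01, these illustrative checks can be run on controller02:
chronyc sources -v // the controller01 entry should be marked with ^* once it becomes the selected source
chronyc tracking // shows the current reference ID and clock offset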
III. Install and Configure DRBD
1. Install DRBD
Run on both nodes:
rpm -ivh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum install -y drbd84-utils kmod-drbd84 kernel*
Reboot the system:
reboot
Load the DRBD module:
modprobe drbd
echo drbd >/etc/modules-load.d/drbd.conf
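A quick way to confirm the module is loaded (illustrative):
lsmod | grep drbd // the drbd module should be listed
cat /proc/drbd // prints the loaded DRBD version once the module is active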
2. Configure DRBD
On controller01:
vim /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
cp /etc/drbd.d/global_common.conf{,.bak}
vim /etc/drbd.d/global_common.conf // replace the contents with the following
global {
usage-count no;
udev-always-use-vnr; # treat implicit the same as explicit volumes
}
common {
protocol C;
handlers {
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
}
startup {
}
options {
}
disk {
on-io-error detach;
}
net {
cram-hmac-alg "sha1";
shared-secret "123456";
}
}
vim /etc/drbd.d/mydrbd.res
resource mydrbd {
on controller01 {
device /dev/drbd0;
disk /dev/sdb;
address 192.168.180.190:7789;
meta-disk internal;
}
on controller02 {
device /dev/drbd0;
disk /dev/sdb;
address 192.168.180.180:7789;
meta-disk internal;
}
}
Copy the configured files to controller02:
scp /etc/drbd.conf controller02:/etc/
scp /etc/drbd.d/{global_common.conf,mydrbd.res} controller02:/etc/drbd.d
Add an extra disk (/dev/sdb) to each virtual machine and reboot; do this on both nodes.
Initialize the DRBD device metadata and bring the resource up; run on both nodes:
dd if=/dev/zero of=/dev/sdb bs=1M count=100
drbdadm create-md mydrbd
drbdadm up mydrbd
Promote the controller01 node to primary:
drbdadm -- --overwrite-data-of-peer primary mydrbd
cat /proc/drbd // check the DRBD status
Run on controller01:
mke2fs -j /dev/drbd0
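To verify the new filesystem, it can be mounted temporarily on the Primary node (controller01); the mount point below is arbitrary and only used for this test:
mkdir -p /mnt/drbdtest
mount /dev/drbd0 /mnt/drbdtest && df -h /mnt/drbdtest
umount /mnt/drbdtest // unmount again; the device will be mounted by whatever service uses it later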
IV. Install and Configure Corosync
Run the following on both machines.
1. Install Pacemaker and Corosync
yum install -y pacemaker pcs psmisc policycoreutils-python
systemctl start pcsd.service && systemctl enable pcsd.service
Set the hacluster user's password to 123456:
passwd hacluster
pcs cluster auth controller01 controller02 // run on controller01; authenticate as hacluster with the password set above
pcs cluster setup --name openstack-HA controller01 controller02 // run on controller01; name the cluster and add the nodes
pcs cluster start --all && pcs status corosync // start the cluster and check its status
2. Configure Corosync
vim /etc/corosync/corosync.conf
totem {
version: 2
cluster_name: openstack-HA
secauth: off
transport: udpu
}
nodelist {
node {
ring0_addr: controller01
nodeid: 1
}
node {
ring0_addr: controller02
nodeid: 2
}
}
quorum {
provider: corosync_votequorum
two_node: 1
}
logging {
to_logfile: yes
logfile: /var/log/cluster/corosync.log
to_syslog: yes
}
corosync-keygen
cd /etc/corosync/
scp -p authkey corosync.conf controller02:/etc/corosync/
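After the key and configuration have been distributed, restarting the cluster and checking the membership confirms that both nodes see each other (illustrative checks, not part of the original steps):
pcs cluster stop --all && pcs cluster start --all // restart so the new corosync.conf takes effect
corosync-cfgtool -s // each node's ring should report "no faults"
corosync-cmapctl | grep members // both node IDs should be listed
pcs status corosync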
3. Configure Pacemaker
Set the initial cluster properties:
pcs cluster status
pcs property set no-quorum-policy=ignore
pcs resource defaults migration-threshold=1
pcs property set stonith-enabled=false
After a failed controller recovers, resources should not migrate back to their original node (moving them back and forth disrupts the service), so it is recommended to set the following values to the defaults recommended in the official documentation.
pcs resource defaults resource-stickiness=100 && pcs resource defaults
pcs resource op defaults timeout=90s && pcs resource op defaults
pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5min
crm_verify -L -V
If crm_verify produces no output, the configuration is correct.
Set the detailed cluster properties:
Run the following command on the primary node to configure the VIP and its monitoring interval.
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.180.200 cidr_netmask=24 op monitor interval=30s
Check the cluster properties:
pcs property
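To confirm that the VIP resource is running (illustrative checks):
pcs status resources // the vip resource should show as Started on one node
ip a | grep 192.168.180.200 // on that node, the VIP should appear on a network interface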
V. Install and Configure MariaDB
Install and configure MariaDB on both nodes.
1. Install MariaDB
yum -y install mariadb mariadb-server python2-PyMySQL
2. Configure MariaDB
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.180.190
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
On controller02, edit /etc/my.cnf.d/openstack.cnf and change only bind-address = 192.168.180.190 to 192.168.180.180; the rest of the configuration stays the same as on 192.168.180.190.
On both nodes, start the database service and enable it at boot.
systemctl enable mariadb.service && systemctl start mariadb.service
mysql_secure_installation // set the root password to 123456
Log in to test:
mysql -u root -p123456
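A quick, illustrative check that the settings in openstack.cnf were picked up:
mysql -u root -p123456 -e "SHOW DATABASES;"
mysql -u root -p123456 -e "SHOW VARIABLES LIKE 'max_connections';" // should report 4096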
VI. Install and Configure Memcached
Install the Memcached service on both nodes:
yum install memcached python-memcached -y
vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.180.190,::1" // this node's IP (use 192.168.180.180 on controller02)
systemctl restart memcached.service && systemctl enable memcached.service
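To confirm Memcached is listening on the configured address (illustrative; netstat comes from the net-tools package installed earlier):
systemctl status memcached.service
netstat -tnlp | grep 11211 // should show memcached bound to the node IP set in OPTIONS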
VII. Install and Configure RabbitMQ
Install and configure RabbitMQ on both nodes.
1. Install RabbitMQ
yum install centos-release-openstack-train -y
yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
2. Configure RabbitMQ
Use rabbitmqctl to add the openstack user with the password admin:
rabbitmqctl add_user openstack admin
Grant the openstack user full configure, write, and read permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
RabbitMQ ships with a web management UI; just enable the plugin to use it.
rabbitmq-plugins enable rabbitmq_management
Log in at http://192.168.180.190:15672/ with the default username guest and password guest.
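A few illustrative checks that the user, permissions, and listeners are in place:
rabbitmqctl list_users // should list both guest and openstack
rabbitmqctl list_permissions // openstack should have ".*" ".*" ".*" on the default vhost
netstat -tnlp | grep -E '5672|15672' // the AMQP and management ports should be listening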
VIII. Install and Configure Keystone
Run the following on both nodes.
1. Install Keystone
mysql -u root -p123456 -e "CREATE DATABASE keystone;"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller01' IDENTIFIED BY 'admin';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'admin';"
yum -y install openstack-keystone python-openstackclient httpd mod_wsgi
2. Configure Keystone
Run the following on both nodes.
vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:admin@192.168.180.190/keystone // on controller02, change the IP to 192.168.180.180
[token]
provider = fernet
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password admin --bootstrap-admin-url http://controller01:5000/v3/ --bootstrap-internal-url http://controller01:5000/v3/ --bootstrap-public-url http://controller01:5000/v3/ --bootstrap-region-id RegionOne // on controller02, replace controller01 with controller02 in the URLs
vim /etc/httpd/conf/httpd.conf
ServerName controller01 // on controller02, use controller02 here
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service && systemctl start httpd.service
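To verify that Keystone is answering behind Apache (illustrative; use the local node's hostname):
curl http://controller01:5000/v3 // should return a small JSON document describing the Identity v3 API
systemctl status httpd.service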
cat >> ~/admin-openrc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller01:5000/v3 // on controller02, use controller02 here
export OS_IDENTITY_API_VERSION=3
EOF
// On controller02, replace controller01 with controller02 in the file above.
chmod +x admin-openrc && . admin-openrc
env | grep OS
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo // enter the password (demo) twice
openstack role create user
openstack role add --project demo --user demo user
Run the following command to clear the OS_TOKEN and OS_URL environment variables:
unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller01:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
. admin-openrc && openstack token issue
// On controller02, change the hostname in the command accordingly.
cat >> ~/demo-openrc << EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
// On controller02, replace controller01 with controller02 in the file above.
chmod +x demo-openrc && . demo-openrc
openstack --os-auth-url http://controller01:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue // enter the demo password
// On controller02, change the hostname in the command accordingly.
openstack token issue
IX. Install and Configure the Dashboard
Install and configure the Dashboard on both nodes:
yum -y install openstack-dashboard python-openstackclient
vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller01"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '192.168.180.190:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
// enable version 3 of the Identity API
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity“: 3,
“image”: 2,
“volume”: 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
…
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
// The configuration on the two OpenStack nodes is identical except for the IP addresses in the file, which must be changed to match each node.
scp /etc/openstack-dashboard/local_settings 192.168.180.180:/etc/openstack-dashboard/
systemctl restart httpd.service memcached.service
X. Verify OpenStack
1. Verify the cluster status
pcs cluster status
2. Log in through the VIP at http://192.168.180.200
On the login page that appears, enter default for the domain, admin for the user name, and admin for the password.
If the page returns a "Not Found" error:
Solution:
vim /etc/httpd/conf.d/openstack-dashboard.conf
#WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
#Alias /dashboard/static /usr/share/openstack-dashboard/static
Alias /static /usr/share/openstack-dashboard/static // the extra /dashboard path component is removed from both directives
3. Verify HA failover
pcs cluster stop controller01
pcs cluster status // check on both nodes
ip a
http://192.168.180.200
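After stopping the cluster on controller01, the VIP should fail over to controller02. Some illustrative checks, plus bringing controller01 back once the test is done:
pcs status resources // run on controller02; vip should now be Started on controller02
ip a | grep 192.168.180.200 // the VIP should appear on controller02
pcs cluster start controller01 // rejoin controller01 to the cluster after the test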