Deploying a Multi-Node OpenStack Enterprise Private Cloud
I. Base Environment Configuration
1. Common initialization
Run these steps on all three nodes.
Hostname    RAM  Disk
controller  8G   100G   two NICs: ens33 192.168.180.210, ens37 192.168.180.220
compute01   8G   100G
block01     4G   100G + 60G
hostnamectl set-hostname controller && bash
 hostnamectl set-hostname compute01 && bash
 hostnamectl set-hostname block01 && bash
yum -y install vim lrzsz net-tools
cat >> /etc/hosts<<EOF
 192.168.180.210 controller
 192.168.180.200 compute01
 192.168.180.190 block01
 EOF
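The hosts entries above can also be generated from a single list, which keeps names and IPs in one place. A minimal sketch, writing to a temporary file rather than the real /etc/hosts (the temp file is illustrative):

```shell
#!/bin/sh
# Sketch: build the three hosts entries from one list (temp file, not /etc/hosts).
HOSTS_FILE=$(mktemp)
for entry in \
    "192.168.180.210 controller" \
    "192.168.180.200 compute01" \
    "192.168.180.190 block01"
do
    echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
rm -f "$HOSTS_FILE"
```

Pointing the loop at /etc/hosts (with >> append) reproduces the heredoc above exactly.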
systemctl stop firewalld.service && systemctl disable firewalld.service
sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config && setenforce 0
 reboot
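The sed one-liner above rewrites only the line beginning with SELINUX=, leaving SELINUXTYPE= untouched (the `=` in the address keeps it from matching SELINUXTYPE). This can be rehearsed against a scratch copy of the file (a sketch; the scratch file stands in for /etc/selinux/config):

```shell
#!/bin/sh
# Sketch: verify the SELinux substitution on a scratch file.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"
sed -i '/^SELINUX=/s/enforcing/disabled/' "$CFG"
grep '^SELINUX=' "$CFG"    # prints: SELINUX=disabled
rm -f "$CFG"
```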
2. Configure time synchronization
controller:
 yum install chrony -y
vim /etc/chrony.conf
 server ntp6.aliyun.com iburst
 allow 192.168.0.0/16
 systemctl enable chronyd.service
 systemctl restart chronyd.service
 chronyc sources
 chronyc -a makestep
compute01/block01:
 yum install chrony -y
vim /etc/chrony.conf
 server controller iburst
 systemctl enable chronyd.service
 systemctl restart chronyd.service
 chronyc sources
3. Install the OpenStack repository
controller/compute01:
 yum install -y centos-release-openstack-train
 yum install python-openstackclient openstack-selinux openstack-utils -y
4. Install and configure MySQL (MariaDB)
controller:
 yum install mariadb mariadb-server python2-PyMySQL -y
cat > /etc/my.cnf.d/openstack.cnf << EOF
 [mysqld]
 bind-address = 192.168.180.210
 default-storage-engine = innodb
 innodb_file_per_table = on
 max_connections = 4096
 collation-server = utf8_general_ci
 character-set-server = utf8
 EOF
systemctl enable mariadb
 systemctl start mariadb
 mysql_secure_installation
During mysql_secure_installation, set the root password to 123456.
5. Install and configure RabbitMQ
controller:
 yum install rabbitmq-server -y
 systemctl enable rabbitmq-server
 systemctl start rabbitmq-server
 rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl list_user_permissions openstack
netstat -anpt | grep -E "25672|5672"
6. Install and configure Memcached
controller:
yum install memcached python-memcached -y
Edit the configuration file:
sed -i 's/OPTIONS="-l 127.0.0.1,::1"/OPTIONS="-l 127.0.0.1,::1,controller"/g' /etc/sysconfig/memcached
systemctl enable memcached.service
 systemctl start memcached.service
netstat -anpt | grep "11211"
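The OPTIONS substitution is easy to get wrong because of the nested quotes, so it is worth rehearsing on a scratch file first (a sketch; the scratch file stands in for /etc/sysconfig/memcached):

```shell
#!/bin/sh
# Sketch: rehearse the memcached OPTIONS edit on a scratch file.
MC=$(mktemp)
echo 'OPTIONS="-l 127.0.0.1,::1"' > "$MC"
sed -i 's/OPTIONS="-l 127.0.0.1,::1"/OPTIONS="-l 127.0.0.1,::1,controller"/' "$MC"
cat "$MC"    # prints: OPTIONS="-l 127.0.0.1,::1,controller"
rm -f "$MC"
```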
7. Install and configure etcd
controller:
 yum install etcd -y
cp -a /etc/etcd/etcd.conf{,.bak}
grep -Ev '^$|#' /etc/etcd/etcd.conf.bak > /etc/etcd/etcd.conf
 cat > /etc/etcd/etcd.conf << EOF
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.180.210:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.180.210:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.180.210:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.180.210:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.180.210:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
 EOF
systemctl enable etcd.service
 systemctl start etcd.service
netstat -anpt | grep -E "2379|2380"
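The `grep -Ev '^$|#'` idiom used above (and throughout this guide) rewrites a config file with comments and blank lines stripped. Its effect on a small sample, using a scratch file for illustration:

```shell
#!/bin/sh
# Sketch: strip comments and blank lines the same way the guide trims .bak files.
SRC=$(mktemp)
printf '# etcd settings\n\nETCD_NAME="controller"\n' > "$SRC"
grep -Ev '^$|#' "$SRC"    # prints only: ETCD_NAME="controller"
rm -f "$SRC"
```

Note that the pattern drops any line containing `#`, including inline comments; for the stock .bak files trimmed here, that is the intended behavior.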
II. Deploying Keystone
controller:
mysql -u root -p123456 -e "create database keystone;"
mysql -u root -p123456 -e "grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONE_DBPASS';"
mysql -u root -p123456 -e "grant all privileges on keystone.* to 'keystone'@'%' identified by 'KEYSTONE_DBPASS';"
mysql -u root -p123456 -e "flush privileges;"
yum install openstack-keystone httpd mod_wsgi -y
Configure the Keystone service:
cp -a /etc/keystone/keystone.conf{,.bak}
grep -Ev '^$|#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
 openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
 openstack-config --set /etc/keystone/keystone.conf token provider fernet
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
 keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
 keystone-manage bootstrap --bootstrap-password ADMIN_PASS --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
Configure the Apache server:
cp -a /etc/httpd/conf/httpd.conf{,.bak}
echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
 ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d
 systemctl enable httpd
systemctl start httpd    # verify that port 5000 is now listening
Configure environment variables for the admin account:
 cat >> ~/.bashrc << EOF
 export OS_USERNAME=admin
 export OS_PASSWORD=ADMIN_PASS
 export OS_PROJECT_NAME=admin
 export OS_USER_DOMAIN_NAME=Default
 export OS_PROJECT_DOMAIN_NAME=Default
 export OS_AUTH_URL=http://controller:5000/v3/
 export OS_IDENTITY_API_VERSION=3
 EOF
source ~/.bashrc    # verify with: env | grep OS
Create the OpenStack domain, project, and roles:
openstack project create --domain Default --description "Service Project" service    # verify: openstack project list
openstack role create user    # create the user role
openstack role list    # list roles
Verification: request a token directly, using the environment variables set above:
 openstack token issue
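A common cause of client errors such as a missing auth-url is an unsourced environment file. A small check loop can print any required OS_* variable that is not set (a sketch; run it in the shell where you intend to use the openstack client):

```shell
#!/bin/sh
# Sketch: report which required OpenStack client variables are missing.
for v in OS_USERNAME OS_PASSWORD OS_PROJECT_NAME OS_AUTH_URL; do
    if [ -z "$(printenv "$v")" ]; then
        echo "missing: $v"
    fi
done
```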
III. Deploying Glance
Create the Glance database, user, and grants:
mysql -u root -p123456 -e "create database glance;"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';"
Create the Glance user in OpenStack (source the admin environment variables first):
openstack user create --domain Default --password GLANCE_PASS glance    # verify: openstack user list
openstack role add --project service --user glance admin    # verify: openstack role assignment list
openstack service create --name glance --description "OpenStack Image" image    # verify: openstack service list
Create the Image service API endpoints:
openstack endpoint create --region RegionOne image public http://controller:9292    # verify: openstack endpoint list
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Install the Glance package:
 yum install openstack-glance -y
Configure glance-api.conf:
cp -a /etc/glance/glance-api.conf{,.bak}
cp -a /etc/glance/glance-registry.conf{,.bak}
grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
 openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://controller:5000
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:5000
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
 openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
 openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
 openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
 openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images
Initialize the database:
su -s /bin/sh -c "glance-manage db_sync" glance
Start the service:
systemctl enable openstack-glance-api
systemctl start openstack-glance-api    # verify: netstat -anpt | grep 9292
Upload an image:
openstack image create "cirros" --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public
 openstack image list
 ll /var/lib/glance/images
IV. Deploying the Placement Service
mysql -u root -p123456 -e "CREATE DATABASE placement;"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';"
Create the user and API entities:
openstack user create --domain Default --password PLACEMENT_PASS placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
 openstack endpoint create --region RegionOne placement public http://controller:8778
 openstack endpoint create --region RegionOne placement internal http://controller:8778
 openstack endpoint create --region RegionOne placement admin http://controller:8778
Install the package:
yum install openstack-placement-api -y
Edit the configuration file:
cp /etc/placement/placement.conf{,.bak}
grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
 openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
 openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url http://controller:5000/v3
 openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers controller:11211
 openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
 openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name Default
 openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name Default
 openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
 openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
 openstack-config --set /etc/placement/placement.conf keystone_authtoken password PLACEMENT_PASS
 openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
Populate the database:
su -s /bin/sh -c "placement-manage db sync" placement
cp -a /etc/httpd/conf.d/00-placement-api.conf{,.bak}
Modify 00-placement-api.conf:
cat >> /etc/httpd/conf.d/00-placement-api.conf << EOF
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
EOF
 systemctl restart httpd
Check Placement:
 placement-status upgrade check
V. Deploying the Nova Compute Service (controller and compute nodes)
 controller:
Create the databases:
mysql -u root -p123456 -e "CREATE DATABASE nova_api;"
mysql -u root -p123456 -e "CREATE DATABASE nova;"
mysql -u root -p123456 -e "CREATE DATABASE nova_cell0;"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';"
Create the nova user and service entity:
openstack user create --domain Default --password NOVA_PASS nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
Create the API service endpoints:
 openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
 openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
 openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Install packages:
 yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
Edit the configuration file:
cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.180.210
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
 openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address ‘$my_ip’
 openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
 openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
 openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
 openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
 openstack-config --set /etc/nova/nova.conf placement project_name service
 openstack-config --set /etc/nova/nova.conf placement auth_type password
 openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
 openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
 openstack-config --set /etc/nova/nova.conf placement username placement
 openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
Populate the databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
Verify that cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Start the services:
 systemctl enable openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
 systemctl start openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
Install and configure the Nova service on compute01:
yum install centos-release-openstack-train -y
yum install python-openstackclient openstack-selinux openstack-utils -y
yum install python2-qpid-proton -y
yum install openstack-nova-compute -y
 cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.180.200
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
 openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.180.210:6080/vnc_auto.html
 openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
 openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
 openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
 openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
 openstack-config --set /etc/nova/nova.conf placement project_name service
 openstack-config --set /etc/nova/nova.conf placement auth_type password
 openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
 openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
 openstack-config --set /etc/nova/nova.conf placement username placement
 openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
 openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
Check whether the host supports hardware virtualization acceleration:
egrep -c '(vmx|svm)' /proc/cpuinfo
If this returns 0, configure nova.conf to fall back to QEMU:
 vim /etc/nova/nova.conf
 [libvirt]
 virt_type = qemu
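The egrep check counts CPU flag lines advertising Intel VT-x (vmx) or AMD-V (svm). Against a fabricated cpuinfo with neither flag it returns 0, which is exactly the case that requires virt_type = qemu (a sketch; the scratch file stands in for /proc/cpuinfo):

```shell
#!/bin/sh
# Sketch: the same count against a fabricated cpuinfo without vmx/svm flags.
CPUINFO=$(mktemp)
printf 'flags\t\t: fpu vme de pse\n' > "$CPUINFO"
egrep -c '(vmx|svm)' "$CPUINFO" || true    # prints: 0 (grep exits nonzero on no match)
rm -f "$CPUINFO"
```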
Start the services:
 systemctl enable libvirtd.service openstack-nova-compute.service
 systemctl start libvirtd.service openstack-nova-compute.service
 
controller:
Add the compute node to the cell database:
 openstack compute service list --service nova-compute
Discover the compute hosts:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Set the discovery interval:
 vim /etc/nova/nova.conf
 [scheduler]
 discover_hosts_in_cells_interval = 300
Restart the service:
 systemctl restart openstack-nova-api
Verify the compute service:
 openstack compute service list
Check that each component's API is reachable:
 openstack catalog list
Check that the image is listed:
 openstack image list
Check that the cell and Placement APIs are healthy; if either fails, instances cannot be created later:
 nova-status upgrade check
 
VI. Deploying Neutron
controller:
Create the database:
mysql -u root -p123456 -e "CREATE DATABASE neutron;"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';"
Create the user:
 openstack user create --domain Default --password NEUTRON_PASS neutron
 openstack role add --project service --user neutron admin
Create the Neutron service entity and API endpoints:
openstack service create --name neutron --description "OpenStack Networking" network
 openstack endpoint create --region RegionOne network public http://controller:9696
 openstack endpoint create --region RegionOne network internal http://controller:9696
 openstack endpoint create --region RegionOne network admin http://controller:9696
Install packages:
 yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
Edit the configuration files:
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name Default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name Default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
Edit the ML2 plug-in configuration file ml2_conf.ini:
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
 openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
 openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
 openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
 openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
 openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
 openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true
 openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
Edit the Linux bridge agent configuration file linuxbridge_agent.ini:
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens37
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.180.210
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Edit the DHCP agent configuration file dhcp_agent.ini:
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
 openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
 openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
 openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
Configure the metadata agent, which communicates with Nova:
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
Configure Nova on the controller to use Neutron and the metadata proxy:
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name Default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name Default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
Configure the layer-3 agent l3_agent.ini:
cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
 openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
 openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
Create a symbolic link to the ML2 plug-in configuration:
 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Synchronize the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova-api service:
 systemctl restart openstack-nova-api.service
Start the Neutron services:
 systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
 systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
netstat -anpt | grep 9696
Install and configure the Neutron service on compute01:
 yum install openstack-neutron-linuxbridge ebtables ipset -y
Edit the configuration files:
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
Edit the Linux bridge agent configuration file:
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
 openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
 openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
 openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.180.200
 openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
 openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
 openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Adjust kernel parameters:
echo -e 'net.bridge.bridge-nf-call-iptables=1\nnet.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
 modprobe br_netfilter
 sysctl -p
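The echo -e line appends two bridge-netfilter keys; note that the br_netfilter module must be loaded (the modprobe above) before sysctl -p can apply them. The append itself can be rehearsed on a scratch file, using printf as the portable equivalent of echo -e (a sketch):

```shell
#!/bin/sh
# Sketch: append the two bridge-netfilter keys to a scratch file and count them.
SYSCTL=$(mktemp)
printf 'net.bridge.bridge-nf-call-iptables=1\nnet.bridge.bridge-nf-call-ip6tables=1\n' >> "$SYSCTL"
grep -c '^net.bridge.bridge-nf-call' "$SYSCTL"    # prints: 2
rm -f "$SYSCTL"
```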
Edit the Nova configuration file:
 openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
 openstack-config --set /etc/nova/nova.conf neutron auth_type password
 openstack-config --set /etc/nova/nova.conf neutron project_domain_name Default
 openstack-config --set /etc/nova/nova.conf neutron user_domain_name Default
 openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
 openstack-config --set /etc/nova/nova.conf neutron project_name service
 openstack-config --set /etc/nova/nova.conf neutron username neutron
 openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
Start the network services:
 systemctl restart openstack-nova-compute.service
 systemctl enable neutron-linuxbridge-agent.service
 systemctl start neutron-linuxbridge-agent.service
Verify on the controller node:
 openstack extension list --network
 openstack network agent list
 
VII. Deploying the Dashboard (on compute01)
 yum install openstack-dashboard httpd -y
Edit the configuration file:
 cp -a /etc/openstack-dashboard/local_settings{,.bak}
 vim /etc/openstack-dashboard/local_settings
ALLOWED_HOSTS = ['*']    # around line 39
'LOCATION': 'controller:11211'    # around line 97; also uncomment the surrounding CACHES block
OPENSTACK_HOST = "controller"    # around line 118
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST    # around line 119
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True    # add
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
'enable_lb': False,    # add these three inside OPENSTACK_NEUTRON_NETWORK
'enable_firewall': False,
'enable_vpn': False,
TIME_ZONE = "Asia/Shanghai"    # around line 158
Regenerate the Apache configuration and restart:
 cd /usr/share/openstack-dashboard
 python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
 systemctl enable httpd.service
 systemctl restart httpd.service
 
On the controller node, restart the memcached service:
 systemctl restart memcached.service
Verify on the controller node:
 openstack extension list --network
 openstack network agent list
Open the dashboard at http://192.168.180.200
Domain: default
Username: admin
Password: ADMIN_PASS
VIII. Deploying Cinder
On the controller node:
Create the database:
mysql -u root -p123456 -e "CREATE DATABASE cinder;"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';"
mysql -u root -p123456 -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';"
Create the Cinder service credentials:
openstack user create --domain Default --password CINDER_PASS cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the API endpoints:
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install packages:
yum install openstack-cinder -y
Edit the configuration file:
cp /etc/cinder/cinder.conf{,.bak}
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
 openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
 openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
 openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name Default
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name Default
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
 openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.180.210
 openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
Synchronize the database:
su -s /bin/sh -c "cinder-manage db sync" cinder
Edit the Nova configuration file:
 vim /etc/nova/nova.conf
 [cinder]
 os_region_name = RegionOne
 systemctl restart openstack-nova-api.service
Start the Cinder services:
 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
 systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Configure Cinder on block01:
 yum install centos-release-openstack-train -y
 yum install python-openstackclient openstack-selinux openstack-cinder targetcli python-keystone lvm2 device-mapper-persistent-data -y
 systemctl enable lvm2-lvmetad.service
 systemctl start lvm2-lvmetad.service
Create the LVM physical volume and volume group:
 pvcreate /dev/sdb
 vgcreate cinder-volumes /dev/sdb
Edit the LVM configuration file:
vim /etc/lvm/lvm.conf
devices {
    filter = [ "a/sdb/", "r/.*/" ]
}
Configure Cinder:
 yum install -y openstack-utils
 cp /etc/cinder/cinder.conf{,.bak}
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
 openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
 openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
 openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.180.190
 openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
 openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
 openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name Default
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name Default
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
 openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
 openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
 openstack-config --set /etc/cinder/cinder.conf lvm target_protocol iscsi
 openstack-config --set /etc/cinder/cinder.conf lvm target_helper lioadm
 openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
Start the services:
 systemctl enable openstack-cinder-volume.service target.service
 systemctl start openstack-cinder-volume.service target.service
Verify on the controller node:
 openstack volume service list
Giving instances external network access:
1. Make sure NetworkManager is stopped on the controller node: systemctl stop NetworkManager
2. Add a second NIC to the controller node and give it an IP address on the same subnet as ens33:
1) Configure an address on ens37, then reboot the controller node.
2) When adding the external network in the web UI, use the same subnet as the controller node, set the network type to flat, and set the physical network to provider.
3) When attaching an interface to an instance, choose the external (bridged) network; no router needs to be created.
Using the CentOS-7-x86_64-GenericCloud-2211.qcow2c image for instances
When creating an instance, add the following script under Configuration to set the root password:
 #!/bin/bash
 passwd root<<EOF
 abc-1234
 abc-1234
 EOF
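The here-document above feeds the password and its confirmation to passwd's standard input. The mechanics can be seen with cat standing in for passwd (a sketch; cat simply echoes the two lines it receives on stdin):

```shell
#!/bin/sh
# Sketch: a here-document supplies two stdin lines, as passwd expects.
cat << EOF
abc-1234
abc-1234
EOF
```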