Environment setup
Machine configuration: three 8-vCPU / 8 GB virtual machines; one acts as the control node and the other two as converged compute/storage nodes.
Network layout: 192.168.122.0/24 public network; 192.168.3.0/24 storage network; 192.168.4.0/24 management network and SDN tunnel network.
I have set up a local package repository here, so the upstream repositories do not need to be configured manually; building and configuring the local repository will be covered in a separate document.
Node network information:
Node             Management/tunnel network   Storage network   Public network
Control node     192.168.4.6                 192.168.3.5       192.168.122.2
Compute node 1   192.168.4.7                 192.168.3.6       192.168.125.5
Compute node 2   192.168.4.8                 192.168.3.7       192.168.122.6
Network topology (diagram not reproduced here)
Install chrony. The control node synchronizes time from an external server, and all other nodes (the compute nodes) synchronize directly from the control node.
yum install chrony
Edit the config file (vim /etc/chrony.conf) and add these two lines:
server cn.ntp.org.cn iburst
allow 192.168.4.0/24
Enable at boot: systemctl enable chronyd
Start the service: systemctl start chronyd
On the other nodes: yum install chrony
Edit the config file (vim /etc/chrony.conf) and add this line:
server 192.168.4.6 iburst
Enable at boot: systemctl enable chronyd
Start the service: systemctl start chronyd
Install the OpenStack client: yum install python-openstackclient
Install MariaDB (database service): yum install mariadb mariadb-server python2-PyMySQL
Edit vim /etc/my.cnf.d/openstack.cnf:
[mysqld]
bind-address = 192.168.4.6  # the management-network IP
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Enable at boot:
systemctl enable mariadb
Start MariaDB:
systemctl start mariadb
Install RabbitMQ (message queue): yum install rabbitmq-server
Enable at boot:
systemctl enable rabbitmq-server
Start RabbitMQ:
systemctl start rabbitmq-server
Create the openstack user and set its password:
rabbitmqctl add_user openstack 123456
Grant the openstack user configure, write, and read permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Install memcached (caches Keystone tokens): yum install memcached python-memcached
systemctl enable memcached
systemctl start memcached
Install Keystone (identity service). Connect to the database:
[root@control-node1 yum.repos.d]# mysql
Create the keystone database:
create database keystone;
Grant privileges on the database (set your own password; 123456 is used throughout for convenience):
grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';
grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';
Keystone runs under httpd via mod_wsgi and handles identity requests on ports 5000 and 35357, the ports Keystone listens on by default.
Install Keystone and mod_wsgi:
yum install openstack-keystone httpd mod_wsgi
Edit the Keystone config file:
vim /etc/keystone/keystone.conf
connection = mysql+pymysql://keystone:123456@192.168.4.6/keystone  # the database connection, added under [database]
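The connection value is a SQLAlchemy-style URL of the form dialect+driver://user:password@host/database. As an illustrative sanity check (not part of the deployment), the pieces can be pulled apart with Python's standard library:

```python
from urllib.parse import urlsplit

# The SQLAlchemy-style URL configured in keystone.conf above
url = urlsplit("mysql+pymysql://keystone:123456@192.168.4.6/keystone")

print(url.scheme)            # dialect+driver: mysql+pymysql
print(url.username)          # database user: keystone
print(url.hostname)          # MariaDB host (management-network IP)
print(url.path.lstrip("/"))  # database name: keystone
```

The same format is reused for the glance, nova, neutron, cinder, and aodh connection lines below, with only the user, password, and database name changing.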
Configure the token provider. Keystone currently supports four token formats (UUID, PKI, PKIZ, Fernet); Fernet is used here. The article at http://www.tuicool.com/articles/jQJNFrn describes the formats in detail.
[token]
provider = fernet
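Fernet's appeal is that tokens are self-contained, symmetric-key-signed payloads, so nothing has to be persisted or pruned in a token table. Below is a toy sketch of that idea using only the Python standard library; it illustrates signed, stateless tokens and is not the real Fernet format (which additionally AES-encrypts the payload and rotates versioned keys):

```python
import base64
import hashlib
import hmac
import json

# Illustrative secret; Keystone keeps its real keys under /etc/keystone/fernet-keys/
SECRET = b"fernet-demo-key"

def issue(payload: dict) -> str:
    """Sign a payload; the token itself carries all the state."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def validate(token: str) -> dict:
    """Recover the payload, rejecting any tampered token."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue({"user": "admin", "project": "admin"})
print(validate(token)["user"])  # admin
```

Any node holding the key can validate a token locally, which is why the fernet_setup step below only distributes keys rather than creating database tables.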
Sync the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone  # db_sync prints nothing even when it fails, so check the Keystone log file to confirm it succeeded
Initialize the Fernet keys:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service (this creates the admin account):
keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://192.168.4.6:35357/v3/ \
--bootstrap-internal-url http://192.168.4.6:35357/v3/ \
--bootstrap-public-url http://192.168.122.2:5000/v3/ \
--bootstrap-region-id RegionOne
Configure the Apache server: vim /etc/httpd/conf/httpd.conf
Set ServerName to the management-network IP: ServerName 192.168.4.6
Symlink the Keystone WSGI config into Apache's config directory: ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Enable at boot: systemctl enable httpd
Start httpd: systemctl start httpd
Check the port:
lsof -i:5000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
httpd 18883 root 6u IPv6 57978 0t0 TCP *:commplex-main (LISTEN)
httpd 18894 apache 6u IPv6 57978 0t0 TCP *:commplex-main (LISTEN)
httpd 18895 apache 6u IPv6 57978 0t0 TCP *:commplex-main (LISTEN)
httpd 18896 apache 6u IPv6 57978 0t0 TCP *:commplex-main (LISTEN)
httpd 18897 apache 6u IPv6 57978 0t0 TCP *:commplex-main (LISTEN)
httpd 18898 apache 6u IPv6 57978 0t0 TCP *:commplex-main (LISTEN)
Create an environment-variable file for root:
vim /root/openrc
#!/bin/bash
export OS_USERNAME=admin
export OS_PASSWORD=123456 # the password set during the Identity service bootstrap above
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=default
export OS_AUTH_URL=http://192.168.4.6:35357/v3
export OS_IDENTITY_API_VERSION=3
Create the domain, projects, and users. Create the service project:
openstack project create --domain default --description "Service Project" service
Create the user role:
openstack role create user
No regular user is created here. Test that the admin user can obtain a token:
openstack --os-auth-url http://192.168.4.6:35357/v3 token issue
Install the Glance image service. Connect to MariaDB and create the database:
create database glance;
Grant privileges:
grant all privileges on glance.* to 'glance'@'localhost' identified by '123456';
grant all privileges on glance.* to 'glance'@'%' identified by '123456';
grant all privileges on glance.* to 'glance'@'control-node1.novalocal' identified by '123456';
Replace control-node1.novalocal with your own host name: even though glance-api.conf is configured with an IP, the connection is still matched against the host name by default, so an additional grant for the host name is needed.
Create the glance user and set a password: openstack user create --domain default --password-prompt glance
Add the admin role to the glance user: openstack role add --project service --user glance admin
Create the glance service: openstack service create --name glance --description "OpenStack Image" image
Create the glance endpoints:
openstack endpoint create --region RegionOne image public http://192.168.122.2:9292
openstack endpoint create --region RegionOne image internal http://192.168.4.6:9292
openstack endpoint create --region RegionOne image admin http://192.168.4.6:9292
Install the package: yum install openstack-glance
Configure Glance: vim /etc/glance/glance-api.conf
Configure the database:
[database]
connection = mysql+pymysql://glance:123456@192.168.4.6/glance
Configure Keystone authentication:
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
Configure the image store:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
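Since nearly every step in this guide edits an INI file, Python's configparser offers a quick way to confirm that an option actually landed in the right section. A small illustrative sketch; the inline sample mirrors the [glance_store] options above, and on a real node you would read /etc/glance/glance-api.conf instead:

```python
import configparser

# Stand-in for the file edited above; on a node: cfg.read("/etc/glance/glance-api.conf")
sample = """
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg["glance_store"]["default_store"])          # file
print(cfg["glance_store"]["filesystem_store_datadir"])
```

A KeyError here would mean the option ended up outside its section, a common cause of services silently ignoring an edit.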
vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:123456@192.168.4.6/glance
Configure Keystone authentication:
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
Sync the database: su -s /bin/sh -c "glance-manage db_sync" glance
Enable at boot: systemctl enable openstack-glance-api openstack-glance-registry
Start the services: systemctl start openstack-glance-api openstack-glance-registry
After finishing these steps it is best to check the logs for errors; do this after deploying every component so that problems can be located quickly.
Download CirrOS to test:
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Check that the image uploaded successfully: glance image-list
Install the Nova compute service. On the control node, create the databases:
create database nova_api;
create database nova;
Grant privileges:
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'control-node1.novalocal' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'control-node1.novalocal' IDENTIFIED BY '123456';
Create the nova user, service, and endpoints. Create the user:
openstack user create --domain default --password-prompt nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create the service:
openstack service create --name nova --description "OpenStack Compute" compute
Create the endpoints:
openstack endpoint create --region RegionOne compute public http://192.168.122.2:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://192.168.4.6:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://192.168.4.6:8774/v2.1/%\(tenant_id\)s
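The %\(tenant_id\)s in those URLs is the literal template %(tenant_id)s (the backslashes only stop the shell from interpreting the parentheses); clients later fill it in with the project ID using ordinary Python %-formatting. A quick illustration with a made-up project ID:

```python
# The endpoint template as stored (after shell unescaping)
template = "http://192.168.4.6:8774/v2.1/%(tenant_id)s"

# Hypothetical project ID, for illustration only
url = template % {"tenant_id": "f3a1b2c4d5e6"}
print(url)  # http://192.168.4.6:8774/v2.1/f3a1b2c4d5e6
```

The cinder endpoints later in this guide use the same placeholder.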
Install the nova control-plane packages:
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
Configure Nova: vim /etc/nova/nova.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@192.168.4.6 # the RabbitMQ account and password
my_ip = 192.168.4.6
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
[api_database]
connection = mysql+pymysql://nova:123456@192.168.4.6/nova_api # the nova_api database connection
[database]
connection = mysql+pymysql://nova:123456@192.168.4.6/nova
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
Configure noVNC (these options go in the [vnc] section):
[vnc]
novncproxy_port=6080
novncproxy_base_url=http://211.156.182.144:6080/vnc_auto.html
vncserver_listen=192.168.4.6
[glance]
api_servers = http://192.168.4.6:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Sync the databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
Enable the services at boot and start them:
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
Compute node installation. Install nova-compute: yum install openstack-nova-compute
Configure nova-compute: vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@192.168.4.6 # the RabbitMQ account and password
auth_strategy = keystone
my_ip = 192.168.4.7
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.4.7 # this node's own IP
novncproxy_base_url=http://211.156.182.144:6080/vnc_auto.html # use the control node's public IP here
[glance]
api_servers = http://192.168.4.6:9292
配置锁路径
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[libvirt]
virt_type = qemu # use kvm on physical servers; use qemu when the hosts are themselves virtual machines
Enable at boot: systemctl enable libvirtd.service openstack-nova-compute.service
Start nova-compute: systemctl start libvirtd.service openstack-nova-compute.service
On the control node, verify that the compute service has registered and is connected (for example with nova service-list).
Configure Neutron. Control node installation. Open vSwitch is used here rather than Linux bridge: Open vSwitch is far more capable, though slightly more complex to configure.
Create the database: create database neutron;
Grant privileges:
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'control-node1.novalocal' IDENTIFIED BY '123456';
Create the neutron user and set a password: openstack user create --domain default --password-prompt neutron
Add the admin role to the neutron user: openstack role add --project service --user neutron admin
Create the neutron service: openstack service create --name neutron --description "OpenStack Networking" network
Create the neutron endpoints:
openstack endpoint create --region RegionOne network public http://192.168.122.2:9696
openstack endpoint create --region RegionOne network admin http://192.168.4.6:9696
openstack endpoint create --region RegionOne network internal http://192.168.4.6:9696
Install the Neutron packages: yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
vim /etc/neutron/neutron.conf
[DEFAULT]
service_plugins = router
transport_url = rabbit://openstack:123456@192.168.4.6
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
state_path = /var/lib/neutron
use_syslog = True
syslog_log_facility = LOG_LOCAL4
log_dir =/var/log/neutron
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
base_mac = fa:16:3e:00:00:00
mac_generation_retries = 32
dhcp_lease_duration = 600
dhcp_agent_notification = True
allow_bulk = True
allow_pagination = False
allow_sorting = False
allow_overlapping_ips = True
advertise_mtu = True
agent_down_time = 30
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
allow_automatic_l3agent_failover = True
dhcp_agents_per_network = 2
api_workers = 9
rpc_workers = 9
network_device_mtu=1450
[database]
connection = mysql+pymysql://neutron:123456@192.168.4.6/neutron
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://192.168.4.6:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in:
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
path_mtu = 1450
type_drivers = flat,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
physical_network_mtus = physnet1:1500
[ml2_type_flat]
flat_networks =*
[ml2_type_vxlan]
vni_ranges =2:65535
vxlan_group =224.0.0.1
[securitygroup]
enable_security_group = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip=192.168.4.6
tunnel_bridge=br-tun
enable_tunneling=True
integration_bridge=br-int
bridge_mappings=physnet1:br-ex
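The path_mtu = 1450 above follows from VXLAN encapsulation overhead: each tenant packet gains an outer IP header, a UDP header, and a VXLAN header, and carries its inner Ethernet header inside the tunnel, 50 bytes in total on an IPv4 underlay, so a 1500-byte physical network leaves 1450 bytes for tenant traffic. The arithmetic:

```python
# Per-packet VXLAN overhead on an IPv4 underlay
OUTER_IP = 20    # outer IPv4 header
OUTER_UDP = 8    # outer UDP header (dst port 4789)
VXLAN_HDR = 8    # VXLAN header carrying the VNI
INNER_ETH = 14   # encapsulated Ethernet header of the tenant frame

physical_mtu = 1500  # physnet1, per physical_network_mtus above
overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH
tenant_mtu = physical_mtu - overhead
print(tenant_mtu)  # 1450
```

The network_device_mtu = 1450 set in neutron.conf earlier is the same number applied to tenant interfaces.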
Configure the l3-agent: vim /etc/neutron/l3_agent.ini
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
metadata_port = 8775
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = True
Configure the dhcp-agent: vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
resync_interval = 30
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
enable_isolated_metadata = True
enable_metadata_network = False
dhcp_domain = openstacklocal
dhcp_broadcast_reply = False
dhcp_delete_namespaces = True
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
state_path=/var/lib/neutron
vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
polling_interval = 2
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = True
prevent_arp_spoofing = False
extensions =
[ovs]
local_ip=192.168.4.6
tunnel_bridge=br-tun
enable_tunneling=True
integration_bridge=br-int
bridge_mappings=physnet1:br-ex
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true
Enable Open vSwitch at boot: systemctl enable openvswitch.service
Start Open vSwitch: systemctl start openvswitch
Create the br-int, br-ex, and br-tun bridges:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-br br-tun
Attach the external-network NIC to br-ex: ovs-vsctl add-port br-ex eth2
Enable the agent at boot: systemctl enable neutron-openvswitch-agent.service
Start it: systemctl start neutron-openvswitch-agent.service
Configure Neutron on the compute nodes. First adjust kernel parameters by editing /etc/sysctl.conf:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply them with sysctl -p, then install the packages:
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@192.168.4.6
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
path_mtu = 1450
type_drivers = flat,vxlan
tenant_network_types = vxlan
physical_network_mtus =physnet1:1500
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[securitygroup]
enable_ipset = true
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
local_ip=192.168.4.7
tunnel_bridge=br-tun
enable_tunneling=True
integration_bridge=br-int
bridge_mappings=physnet1:br-ex
[agent]
enable_distributed_routing=True
prevent_arp_spoofing=True
arp_responder=True
polling_interval=2
drop_flows_on_start=False
vxlan_udp_port=4789
l2_population=True
tunnel_types=vxlan
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true
Enable and start Open vSwitch:
systemctl enable openvswitch.service
systemctl start openvswitch.service
Create br-ex, br-int, and br-tun:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-br br-tun
Configure Nova to use Neutron: vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
[neutron]
url = http://192.168.4.6:9696
auth_url = http://192.168.4.6:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
systemctl restart neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute
After the installation you can verify on the control node that everything registered successfully (for example with neutron agent-list).
Install the dashboard: yum install openstack-dashboard
Edit vim /etc/openstack-dashboard/local_settings:
Set the control node IP here:
OPENSTACK_HOST = "192.168.4.6"
Allow access from any host:
ALLOWED_HOSTS = ['*', ]
Configure memcached session storage:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.4.6:11211',
    }
}
Enable Keystone v3 authentication:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Set the default domain:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
Set the API versions:
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
Make user the default role for accounts created through the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Restart the services: systemctl restart httpd.service memcached.service
The dashboard is then reachable at http://control_ip/dashboard
Log in as admin with the password you set through Keystone; if you have forgotten it, check the openrc file.
Create a flat network to serve as the floating-IP pool: Admin -> Networks -> Create Network.
physnet1 is the name mapped to br-ex by bridge_mappings in ml2_conf.ini. After creating the network, add a subnet to it; then create an ordinary tenant network, create a router, attach the tenant subnet to the router, create a flavor, and launch a VM on the tenant network.
Then, on the VM's compute node or on the control node, run: ovs-vsctl show
You should see that the VXLAN tunnel between the compute node and the network node has been established.
Cinder configuration. On the control node, create the database: create database cinder;
Grant privileges:
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' identified by '123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' identified by '123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'control-node1.novalocal' identified by '123456';
Create the user: openstack user create --domain default --password-prompt cinder
Give the cinder user the admin role: openstack role add --project service --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
Create the endpoints:
openstack endpoint create --region RegionOne \
volume public http://192.168.122.2:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volume internal http://192.168.4.6:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volume admin http://192.168.4.6:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 public http://192.168.122.2:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 internal http://192.168.4.6:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 admin http://192.168.4.6:8776/v2/%\(tenant_id\)s
Install Cinder: yum install openstack-cinder
vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@192.168.4.6
auth_strategy = keystone
[database]
connection = mysql+pymysql://cinder:123456@192.168.4.6/cinder
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Sync the database: su -s /bin/sh -c "cinder-manage db sync" cinder
Configure the compute nodes to use Cinder: vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart the service: systemctl restart openstack-nova-api.service
Enable Cinder at boot: systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Configure a storage node. Install LVM: yum install lvm2
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
Create an LVM physical volume: pvcreate /dev/vdb
Create the volume group: vgcreate cinder-volumes /dev/vdb
Edit vim /etc/lvm/lvm.conf:
devices {
filter = [ "a/vdb/", "r/.*/"]
}
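The filter is an ordered list of accept (a) / reject (r) regex rules: LVM tests each device against the rules in order and the first match decides, so this configuration scans only /dev/vdb. A toy re-implementation of that first-match-wins logic, for illustration only; real LVM allows arbitrary delimiter characters and matches full device paths:

```python
import re

def lvm_filter_accepts(rules, device):
    """First matching rule wins: 'a' accepts the device, 'r' rejects it."""
    for rule in rules:
        action, pattern = rule[0], rule[2:-1]  # "a/vdb/" -> ("a", "vdb")
        if re.search(pattern, device):
            return action == "a"
    return True  # devices matching no rule are accepted

rules = ["a/vdb/", "r/.*/"]  # as configured in /etc/lvm/lvm.conf above
print(lvm_filter_accepts(rules, "/dev/vdb"))  # True
print(lvm_filter_accepts(rules, "/dev/sda"))  # False
```

Without the trailing r/.*/ reject-all rule, LVM would still scan every other block device, which on a Cinder node can include volumes belonging to guests.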
Install the packages: yum install openstack-cinder targetcli python-keystone
Edit the Cinder config file: vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@192.168.4.6
verbose = True
auth_strategy = keystone
enabled_backends = lvm
glance_api_servers = http://192.168.4.6:9292
[database]
connection = mysql+pymysql://cinder:123456@192.168.4.6/cinder
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
From the dashboard, create a volume and attach it to an instance.
Ceilometer configuration. Ceilometer stores meter data in MongoDB, so MongoDB must first be installed on the control node.
Install MongoDB on the control node: yum install mongodb-server mongodb
Configure MongoDB: vim /etc/mongod.conf
smallfiles = true # limit journal file size
Create the Ceilometer database and account in MongoDB, replacing 123456 with a password of your own.
Create the user: openstack user create --domain default --password-prompt ceilometer
Add the admin role to the ceilometer user: openstack role add --project service --user ceilometer admin
Create the ceilometer service: openstack service create --name ceilometer --description "Telemetry" metering
Create the ceilometer endpoints:
openstack endpoint create --region RegionOne metering public http://192.168.122.2:8777
openstack endpoint create --region RegionOne metering admin http://192.168.4.6:8777
openstack endpoint create --region RegionOne metering internal http://192.168.4.6:8777
Install the packages:
yum install openstack-ceilometer-api \
  openstack-ceilometer-collector \
  openstack-ceilometer-notification \
  openstack-ceilometer-central \
  python-ceilometerclient
Configure Ceilometer: vim /etc/ceilometer/ceilometer.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = 192.168.4.6
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = ceilometer
password = 123456
[service_credentials]
auth_type = password
auth_url = http://192.168.4.6:5000/v3
project_domain_name = default
user_domain_name = default
project_name = service
username = ceilometer
password = 123456 # replace with the password set when the ceilometer user was created in Keystone
interface = internalURL
region_name = RegionOne
Create the Ceilometer vhost:
vim /etc/httpd/conf.d/wsgi-ceilometer.conf
Listen 8777
<VirtualHost *:8777>
WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=ceilometer group=ceilometer display-name=%{GROUP}
WSGIProcessGroup ceilometer-api
WSGIScriptAlias / /usr/lib/python2.7/site-packages/ceilometer/api/app.wsgi
WSGIApplicationGroup %{GLOBAL}
ErrorLog /var/log/httpd/ceilometer_error.log
CustomLog /var/log/httpd/ceilometer_access.log combined
</VirtualHost>
WSGISocketPrefix /var/run/httpd
Reload httpd: systemctl reload httpd.service
Enable the services at boot:
systemctl enable openstack-ceilometer-notification.service \
  openstack-ceilometer-central.service \
  openstack-ceilometer-collector.service
Start them:
systemctl start openstack-ceilometer-notification.service \
  openstack-ceilometer-central.service \
  openstack-ceilometer-collector.service
Configure Glance to emit Ceilometer metering data: vim /etc/glance/glance-api.conf
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
rabbit_host = 192.168.4.6
rabbit_userid = openstack
rabbit_password = 123456
Restart the services: systemctl restart openstack-glance-api.service openstack-glance-registry.service
Configure Nova's Ceilometer metering. On the compute nodes, install the packages: yum install openstack-ceilometer-compute python-ceilometerclient python-pecan
Edit vim /etc/ceilometer/ceilometer.conf:
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = 192.168.4.6
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = ceilometer
password = 123456 # replace with the password set when the ceilometer user was created in Keystone
[service_credentials]
auth_url = http://192.168.4.6:5000
project_domain_id = default
user_domain_id = default
auth_type = password
username = ceilometer
project_name = service
password = 123456 # replace with the password set when the ceilometer user was created in Keystone
interface = internalURL
region_name = RegionOne
Edit the nova-compute config file:
vim /etc/nova/nova.conf
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
[oslo_messaging_notifications]
driver = messagingv2
Enable at boot: systemctl enable openstack-ceilometer-compute.service
Start the ceilometer-compute service: systemctl start openstack-ceilometer-compute.service
Restart nova-compute: systemctl restart openstack-nova-compute.service
Configure block storage to use the Ceilometer metering service.
Verify: ceilometer meter-list
Normally this lists meter data for the resources above; in my case it initially failed, and enabling debug showed the request was being denied. The fix was to edit httpd.conf, then:
systemctl restart httpd
After that the test should succeed.
Aodh alarm service. Create the aodh database: create database aodh;
Grant privileges:
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' identified by '123456';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' identified by '123456';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'control-node1.novalocal' identified by '123456';
Create the user: openstack user create --domain default --password-prompt aodh
Give the aodh account the admin role: openstack role add --project service --user aodh admin
Add the service: openstack service create --name aodh --description "Telemetry" alarming
Create the endpoints:
openstack endpoint create --region RegionOne alarming public http://192.168.122.2:8042
openstack endpoint create --region RegionOne alarming internal http://192.168.4.6:8042
openstack endpoint create --region RegionOne alarming admin http://192.168.4.6:8042
Install the packages:
yum install openstack-aodh-api \
  openstack-aodh-evaluator \
  openstack-aodh-notifier \
  openstack-aodh-listener \
  openstack-aodh-expirer \
  python-aodhclient
Edit the config file:
vim /etc/aodh/aodh.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@192.168.4.6
auth_strategy = keystone
[database]
connection = mysql+pymysql://aodh:123456@192.168.4.6/aodh
[keystone_authtoken]
auth_uri = http://192.168.4.6:5000
auth_url = http://192.168.4.6:35357
memcached_servers = 192.168.4.6:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = aodh
password = 123456 # the password set when the aodh account was created in Keystone
[service_credentials]
auth_type = password
auth_url = http://192.168.4.6:5000/v3
project_domain_name = default
user_domain_name = default
project_name = service
username = aodh
password = 123456 # the password set when the aodh account was created in Keystone
interface = internalURL
region_name = RegionOne
systemctl enable openstack-aodh-api.service \
  openstack-aodh-evaluator.service \
  openstack-aodh-notifier.service \
  openstack-aodh-listener.service
systemctl start openstack-aodh-api.service \
  openstack-aodh-evaluator.service \
  openstack-aodh-notifier.service \
  openstack-aodh-listener.service