cf dev start: Failed to deploy services: Failed to deploy PAS: exit status 1 #131
Description
After installing the cf CLI and the cfdev plugin, and downloading the extra asset pcfdev-v1.2.0 for PAS 2.4.4, I try to start cf dev and run into the following error:
cf dev start -f pcfdev-v1.2.0-darwin.tgz
Downloading Network Helper...
Progress: |====================>| 100.0%
Installing cfdevd network helper (requires administrator privileges)...
Password:
Setting up IP aliases for the BOSH Director & CF Router (requires administrator privileges)
Downloading Resources...
Progress: |====================>| 100.0%
Setting State...
Pivotal Telemetry
The Pivotal Telemetry program (“Pivotal Telemetry”) provides Pivotal Software, Inc. (“Pivotal”) with information that enables us to improve our products and services, fix problems, and advise you on how best to deploy and use our products. As part of Pivotal Telemetry, Pivotal collects technical information about your organization’s use of our products and services on a regular basis. Information collected under Pivotal Telemetry does not personally identify any individual.
Additional information regarding the data collected through Pivotal Telemetry and the purposes for which it is used by Pivotal is available on the Pivotal Telemetry page at https://pivotal.io/legal/telemetry.
By opting into the Pivotal Telemetry program, you understand and agree to the collection of product usage data in accordance with the program description at https://pivotal.io/legal/telemetry. If you prefer not to participate in Pivotal Telemetry, you should not join. You may join or leave Pivotal Telemetry at any time.
Are you ok with PCF Dev periodically capturing anonymized telemetry [y/N]?
> N
Creating the VM...
Starting VPNKit...
Waiting for the VM...
Deploying the BOSH Director...
Deploying PAS...
Progress: 3 of 5 (17m46s)FAILED
cf dev start: Failed to deploy services: Failed to deploy PAS: exit status 1
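For reference, this is how I installed the cfdev plugin beforehand (going from memory of the README, so the repository name may be slightly off):

cf install-plugin -r CF-Community cfdev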
I can see in the log that task 576 has failed (opened with vim, then Shift+G to jump to the end of the file):
vim ~/.cfdev/log/deploy-pas.log
Task 576 | 18:20:33 | Preparing deployment: Preparing deployment (00:00:21)
Task 576 | 18:21:42 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 576 | 18:21:43 | Creating missing vms: database/52af32cd-1e7a-4402-9637-faa7db61f7e5 (0)
Task 576 | 18:21:43 | Creating missing vms: blobstore/88b7c940-bc26-4076-8fca-2ed081f3e38e (0)
Task 576 | 18:21:43 | Creating missing vms: compute/cd35cf7d-9851-49ec-8956-803dc2dbb587 (0)
Task 576 | 18:21:43 | Creating missing vms: router/d65403b4-e883-4bdd-a0aa-904dc16eb2e6 (0)
Task 576 | 18:21:43 | Creating missing vms: control/b9821928-8489-417f-815e-eb21e1203c49 (0)
Task 576 | 18:21:58 | Creating missing vms: router/d65403b4-e883-4bdd-a0aa-904dc16eb2e6 (0) (00:00:15)
Task 576 | 18:22:00 | Creating missing vms: blobstore/88b7c940-bc26-4076-8fca-2ed081f3e38e (0) (00:00:17)
Task 576 | 18:22:01 | Creating missing vms: database/52af32cd-1e7a-4402-9637-faa7db61f7e5 (0) (00:00:18)
Task 576 | 18:22:02 | Creating missing vms: compute/cd35cf7d-9851-49ec-8956-803dc2dbb587 (0) (00:00:19)
Task 576 | 18:22:13 | Creating missing vms: control/b9821928-8489-417f-815e-eb21e1203c49 (0) (00:00:30)
Task 576 | 18:22:14 | Updating instance database: database/52af32cd-1e7a-4402-9637-faa7db61f7e5 (0) (canary) (00:01:49)
Task 576 | 18:24:03 | Updating instance blobstore: blobstore/88b7c940-bc26-4076-8fca-2ed081f3e38e (0) (canary) (00:00:56)
Task 576 | 18:24:59 | Updating instance control: control/b9821928-8489-417f-815e-eb21e1203c49 (0) (canary) (00:06:14)
Task 576 | 18:31:13 | Updating instance compute: compute/cd35cf7d-9851-49ec-8956-803dc2dbb587 (0) (canary) (00:07:01)
L Error: 'compute/cd35cf7d-9851-49ec-8956-803dc2dbb587 (0)' is not running after update. Review logs for failed jobs: iptables-logger
Task 576 | 18:38:14 | Error: 'compute/cd35cf7d-9851-49ec-8956-803dc2dbb587 (0)' is not running after update. Review logs for failed jobs: iptables-logger
Task 576 Started Sat May 9 18:20:33 UTC 2020
Task 576 Finished Sat May 9 18:38:14 UTC 2020
Task 576 Duration 00:17:41
Task 576 error
Updating deployment:
Expected task '576' to succeed but state is 'error'
Exit code 1
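The same task output can also be pulled straight from the director once it is targeted (flags per my understanding of the bosh CLI, so treat this as a sketch):

bosh -d cf-66ade9481d314315358c task 576 --debug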
When I retrieve the logs with bosh and look inside the iptables-logger logs, I see the following:
bosh -d cf-66ade9481d314315358c logs
vim /Users/guicey/workspace/pcf-dev/cf-66ade9481d314315358c-20200509-204542-469776/compute.cd35cf7d-9851-49ec-8956-803dc2dbb587.2020-05-09-18-45-41/iptables-logger/iptables-logger.stderr.log
panic: open /var/log/kern.log: no such file or directory
goroutine 1 [running]:
code.cloudfoundry.org/lager.(*logger).Fatal(0xc42005a3c0, 0x75e58e, 0xa, 0x79a8a0, 0xc42007d980, 0x0, 0x0, 0x0)
/var/vcap/data/compile/iptables-logger/src/code.cloudfoundry.org/lager/logger.go:162 +0x60f
main.main()
/var/vcap/data/compile/iptables-logger/src/iptables-logger/cmd/iptables-logger/main.go:66 +0x10b3
panic: open /var/log/kern.log: no such file or directory
goroutine 1 [running]:
code.cloudfoundry.org/lager.(*logger).Fatal(0xc420096360, 0x75e58e, 0xa, 0x79a8a0, 0xc420081980, 0x0, 0x0, 0x0)
/var/vcap/data/compile/iptables-logger/src/code.cloudfoundry.org/lager/logger.go:162 +0x60f
main.main()
/var/vcap/data/compile/iptables-logger/src/iptables-logger/cmd/iptables-logger/main.go:66 +0x10b3
panic: open /var/log/kern.log: no such file or directory
goroutine 1 [running]:
code.cloudfoundry.org/lager.(*logger).Fatal(0xc42005a3c0, 0x75e58e, 0xa, 0x79a8a0, 0xc42006f980, 0x0, 0x0, 0x0)
/var/vcap/data/compile/iptables-logger/src/code.cloudfoundry.org/lager/logger.go:162 +0x60f
main.main()
/var/vcap/data/compile/iptables-logger/src/iptables-logger/cmd/iptables-logger/main.go:66 +0x10b3
panic: open /var/log/kern.log: no such file or directory
goroutine 1 [running]:
code.cloudfoundry.org/lager.(*logger).Fatal(0xc420096360, 0x75e58e, 0xa, 0x79a8a0, 0xc420081980, 0x0, 0x0, 0x0)
/var/vcap/data/compile/iptables-logger/src/code.cloudfoundry.org/lager/logger.go:162 +0x60f
main.main()
/var/vcap/data/compile/iptables-logger/src/iptables-logger/cmd/iptables-logger/main.go:66 +0x10b3
panic: open /var/log/kern.log: no such file or directory
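For completeness, this is how I point the bosh CLI at the cfdev director before running the commands above, and how the logs can be scoped to just the failing instance (the cf dev bosh env step is my understanding of the documented flow, so treat it as an assumption):

eval "$(cf dev bosh env)"
bosh -d cf-66ade9481d314315358c logs compute/cd35cf7d-9851-49ec-8956-803dc2dbb587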
When I check the process statuses of the instances with bosh, I see the following:
bosh instances --ps
Using environment '10.144.0.2' as client 'ops_manager'
Task 795. Done
Deployment 'cf-66ade9481d314315358c'
Instance Process Process State AZ IPs Deployment
blobstore/88b7c940-bc26-4076-8fca-2ed081f3e38e - running null 10.144.0.4 cf-66ade9481d314315358c
~ blobstore_nginx running - - -
~ blobstore_url_signer running - - -
~ bosh-dns running - - -
~ bosh-dns-healthcheck running - - -
~ bosh-dns-resolvconf running - - -
~ loggregator_agent running - - -
~ route_registrar running - - -
compute/cd35cf7d-9851-49ec-8956-803dc2dbb587 - failing null 10.144.0.6 cf-66ade9481d314315358c
~ bosh-dns running - - -
~ bosh-dns-adapter running - - -
~ bosh-dns-healthcheck running - - -
~ bosh-dns-resolvconf running - - -
~ garden running - - -
~ iptables-logger failing - - -
~ loggregator_agent running - - -
~ netmon running - - -
~ rep running - - -
~ route_emitter running - - -
~ silk-daemon running - - -
~ vxlan-policy-agent running - - -
control/b9821928-8489-417f-815e-eb21e1203c49 - running null 10.144.0.5 cf-66ade9481d314315358c
~ adapter running - - -
~ auctioneer running - - -
~ bbs running - - -
~ bosh-dns running - - -
~ bosh-dns-healthcheck running - - -
~ bosh-dns-resolvconf running - - -
~ cc_deployment_updater running - - -
~ cc_uploader running - - -
~ cloud_controller_clock running - - -
~ cloud_controller_ng running - - -
~ cloud_controller_worker_1 running - - -
~ cloud_controller_worker_local_1 running - - -
~ credhub running - - -
~ doppler running - - -
~ file_server running - - -
~ locket running - - -
~ log-cache running - - -
~ log-cache-cf-auth-proxy running - - -
~ log-cache-expvar-forwarder running - - -
~ log-cache-gateway running - - -
~ log-cache-nozzle running - - -
~ log-cache-scheduler running - - -
~ loggregator_agent running - - -
~ loggregator_trafficcontroller running - - -
~ nginx_cc running - - -
~ policy-server running - - -
~ policy-server-internal running - - -
~ reverse_log_proxy running - - -
~ reverse_log_proxy_gateway running - - -
~ route_registrar running - - -
~ routing-api running - - -
~ scheduler running - - -
~ service-discovery-controller running - - -
~ silk-controller running - - -
~ statsd_injector running - - -
~ tps_watcher running - - -
~ uaa running - - -
database/52af32cd-1e7a-4402-9637-faa7db61f7e5 - running null 10.144.0.3 cf-66ade9481d314315358c
~ bosh-dns running - - -
~ bosh-dns-healthcheck running - - -
~ bosh-dns-resolvconf running - - -
~ cluster-health-logger running - - -
~ consul_agent running - - -
~ galera-agent running - - -
~ galera-init running - - -
~ gra-log-purger running - - -
~ loggregator_agent running - - -
~ nats running - - -
~ proxy running - - -
~ route_registrar running - - -
router/d65403b4-e883-4bdd-a0aa-904dc16eb2e6 - - null 10.144.0.34 cf-66ade9481d314315358c
5 instances
Succeeded
I'm not sure why this is happening; I have followed the steps in your README. I can see that the iptables-logger process is failing, but the other processes seem to be running. Is cf dev usable now, and can I ignore the error message? Is there a way to disable the iptables-logger job when I start pcf-dev?
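If it would help with triage, I could also try restarting the job by hand from inside the compute VM; I assume the usual bosh ssh + monit flow applies (a sketch only, I have not run this yet):

bosh -d cf-66ade9481d314315358c ssh compute/cd35cf7d-9851-49ec-8956-803dc2dbb587
sudo /var/vcap/bosh/bin/monit summary
sudo /var/vcap/bosh/bin/monit restart iptables-logger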
System info
Model Name: MacBook Pro (macOS Catalina 10.15.4)
Model Identifier: MacBookPro16,1 (2019)
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.3 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 32 GB
Cloud Foundry command line tool version:
cf -v
cf version 6.51.0+2acd15650.2020-04-07
cfdev plugin version:
cf dev version
CLI: 0.0.17
BUILD: 14 (b36c82e)
pas: 2.4.4
p.mysql: 2.5.3-build.7
p.redis: 2.0.1
p.rabbitmq: 1.15.5
p.spring-cloud-services: 2.0.7
bosh cli version:
bosh -v
version 6.2.1-a28042ac-2020-02-10T18:41:00Z
Succeeded