I don’t think time is the issue here; it’s memory. 03:37:43 is when the build starts and 04:41:31 is when it hits c++: fatal error: Killed signal terminated program cc1plus.

Admittedly, that’s suspiciously close to 1 hour, if the build duration maximum were actually 1 hour instead of 2…
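
If it helps to confirm the memory theory, the kernel log on the build agent should have OOM-killer entries right after a failure. Here’s a minimal sketch for pulling them out, assuming a Linux agent where dmesg is readable by the build user (the exact message wording varies by kernel version):

```python
import re
import subprocess

def find_oom_kills():
    """Scan the kernel ring buffer for OOM-killer activity.

    Assumes a Linux agent where `dmesg` is readable by the current user;
    on some distros you may need `journalctl -k` or root instead.
    """
    log = subprocess.run(["dmesg"], capture_output=True, text=True, check=True).stdout
    # Typical lines look like:
    #   Out of memory: Killed process 12345 (cc1plus) total-vm:... anon-rss:...
    pattern = re.compile(r"out of memory|oom-kill|invoked oom-killer", re.IGNORECASE)
    return [line for line in log.splitlines() if pattern.search(line)]

if __name__ == "__main__":
    hits = find_oom_kills()
    if hits:
        print("Possible OOM kills on this agent:")
        for line in hits:
            print(" ", line)
    else:
        print("No OOM-killer entries found in the kernel log.")
```

If cc1plus (or the agent’s own java process) shows up in those lines, that would pretty much settle it.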

Another job failed with the following, which I’m not 100% sure how to interpret, but I could believe it’s another instance of “I’m out of memory” unless someone knows it’s due to something else. I see jobs where this happens after 30 minutes to 1 hour. Most jobs are actually failing this way (not with c++: fatal error: Killed signal terminated program cc1plus), but only for bin jobs.


03:37:42 FATAL: command execution failed
03:37:42 java.nio.channels.ClosedChannelException
03:37:42 	at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:209)
03:37:42 	at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:221)
03:37:42 	at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:817)
03:37:42 	at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:288)
03:37:42 	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:179)
03:37:42 	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:281)
03:37:42 	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:501)
03:37:42 	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:246)
03:37:42 	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:198)
03:37:42 	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:211)
03:37:42 	at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:785)
03:37:42 	at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:172)
03:37:42 	at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:311)
03:37:42 	at hudson.remoting.Channel.close(Channel.java:1502)
03:37:42 	at hudson.remoting.Channel.close(Channel.java:1455)
03:37:42 	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:884)
03:37:42 	at hudson.slaves.SlaveComputer.access$100(SlaveComputer.java:110)
03:37:42 	at hudson.slaves.SlaveComputer$2.run(SlaveComputer.java:765)
03:37:42 	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
03:37:42 	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
03:37:42 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
03:37:42 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
03:37:42 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
03:37:42 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
03:37:42 	at java.lang.Thread.run(Thread.java:750)
03:37:42 Caused: java.io.IOException: Backing channel 'JNLP4-connect connection from ip-10-0-1-232.us-west-1.compute.internal/10.0.1.232:51244' is disconnected.
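
For the ClosedChannelException flavour: that trace just says the remoting channel between the controller and the agent dropped, which is also what you’d see if the agent’s java process (rather than cc1plus) got picked by the OOM killer, so it’s at least consistent with the memory theory. One cheap way to test that is to log memory headroom alongside the build and see whether it collapses right before the disconnect. A rough sketch, assuming a Linux agent with /proc/meminfo (the 60-second interval and the log path are arbitrary choices, not anything we have in place):

```python
import time
from datetime import datetime

def mem_available_kib() -> int:
    """Read MemAvailable from /proc/meminfo (Linux only), in kiB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1])
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

def watch(interval_s: int = 60, path: str = "mem-headroom.log") -> None:
    """Append a timestamped MemAvailable sample every interval_s seconds."""
    with open(path, "a") as out:
        while True:
            stamp = datetime.now().isoformat(timespec="seconds")
            out.write(f"{stamp} MemAvailable={mem_available_kib()} kiB\n")
            out.flush()
            time.sleep(interval_s)

if __name__ == "__main__":
    watch()
```

If the last samples before a 03:37-style disconnect show MemAvailable heading toward zero, that would be two different-looking failures pointing at the same cause.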