Core dump in container

Hi,

Our app is generating core dump files inside the Docker container. What is the best tool/way to debug and look inside those files? And how can I get them down to my local machine for analysis?

Thanks,

Make use of a volume or host mount when you start the container. Then, when a core dump is created, move it to the volume/host mount and analyse it from the host.
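For example, something along these lines (a rough sketch, not from the original posts; the image name, container ID and paths are placeholders):

    # start the container with core dumps enabled and a host directory mounted for them
    docker run --ulimit core=-1 -v /tmp/cores:/cores my-app-image

    # or, if a core was already written inside the container, copy it out to your machine
    docker cp <container-id>:/path/to/core ./core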

Is there a way to configure the container to generate cores in the dedicated volume/host mount? Or should I write a script that periodically scans the working dir and moves the cores when it finds them?

Thanks

Not the container, but I would say the app can be configured as to where the core dump is placed.

Yes, you’re right.
I’ve found how to configure the core location: https://sigquit.wordpress.com/2009/03/13/the-core-pattern/
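For reference, the gist is something like this (a sketch, assuming a matching /cores directory is mounted into the container; the pattern itself is just an example):

    # on the host: tell the kernel where and how to name core files (core_pattern is a kernel-wide setting)
    echo '/cores/core.%e.%p.%t' | sudo tee /proc/sys/kernel/core_pattern

    # in the shell/container where the app runs: make sure core dumps aren't size-limited
    ulimit -c unlimited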
thanks

I appreciate this is a few months old.

Can I confirm that you configured the core pattern in the container only?

I agree this is over 3 years old, with the last reply from 2 years ago, but just in case somebody stumbles upon this while searching, I’ll leave my 2 cents.

First, analyzing core dumps on the host is not a good idea, sorry. I’ve had problems taking core dumps generated in an Ubuntu 16.04 container and analyzing them in gdb on 18.04, not to mention CentOS. gdb needs to be able to find the binaries and libraries involved in the process that ran, and of course libc from CentOS won’t match the one you ran with on Ubuntu.

You need the same libs in order to properly analyze the core dump, so it’s best to analyze it in the container, or on a host / in a container running the same OS and libs. Also, the binary running inside the container might not be present on the host at all. For core dumps generated by our app in a container, I keep a container of the same OS on my machine and run gdb there to analyze them.
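In practice that looks something like this (a sketch; the image tag, paths and binary name are placeholders, and reusing the app’s own image is even better since it already has the exact binary and libs):

    # run a throwaway container of the same OS/libs, with the core dump directory mounted in
    docker run -it -v /tmp/cores:/cores ubuntu:16.04 bash

    # inside that container: install gdb, then load the binary together with the core
    apt-get update && apt-get install -y gdb
    gdb /path/to/your/app /cores/core.myapp.12345
    # at the (gdb) prompt, 'bt' prints the backtrace of the crashing thread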

Also, core_pattern is system-wide as far as I can tell; it can’t be set per container.