I am trying to run a .NET Core NUnit test suite inside Docker: I build and run an image containing the test DLL, and the test itself uses the .NET Core Process class to launch the Docker build of the application under test. This works quite well on my local machine (Windows 10, .NET Core 3.1, Docker Engine 19), but it fails miserably on our Linux TeamCity agent as soon as it hits the dotnet restore step, because the build cannot reach https://api.nuget.org/v3/index.json.
I suspect there is a network configuration issue on the Linux host, but not being a Linux networking expert (or even a rank amateur), I am a bit lost on how to resolve it.
I am specifying /bin/bash as the executable for Process, with `-c "docker build -f Dockerfile -t subscriptionreader ."` as the actual command to execute. Everything goes well until it hits the dotnet restore command.
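For reference, the launch code in the test looks roughly like this (a minimal sketch, not my exact test code; the class and method names here are illustrative):

```csharp
using System.Diagnostics;

public static class DockerBuildRunner
{
    // Sketch of how the NUnit test shells out to docker build via bash.
    public static int RunBuild(string contextDir)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "/bin/bash",
            Arguments = "-c \"docker build -f Dockerfile -t subscriptionreader .\"",
            WorkingDirectory = contextDir,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
            // On the Linux agent this comes back non-zero, with stderr showing
            // dotnet restore unable to reach https://api.nuget.org/v3/index.json.
            return process.ExitCode;
        }
    }
}
```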
The server is running TeamCity 2019.x on CentOS 7 (recent but unknown point release); I am still trying to get the network people to tell me the Docker version. My Dockerfile is based on Ubuntu 18.04, following the Docker documentation for running Docker commands from inside a container. Dockerfiles run directly by TeamCity succeed.
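For context, the test image is built from something along these lines (a sketch, not my exact Dockerfile; the package and paths are the usual ones for Ubuntu 18.04, and the SDK install steps are elided):

```dockerfile
FROM ubuntu:18.04

# Docker CLI so the test can run "docker build" against the host daemon;
# the host's /var/run/docker.sock is mounted in when the container is run.
RUN apt-get update && apt-get install -y docker.io

# .NET Core 3.1 SDK install elided here (Microsoft package feed setup etc.)

# Published test output copied in; the entry point runs the NUnit test DLL.
COPY bin/Release/netcoreapp3.1/publish/ /tests/
WORKDIR /tests
```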
I did a full write-up on Stack Overflow: https://stackoverflow.com/questions/61548410/how-to-run-tests-against-live-docker-containers-net-core-in-teamcity-pipeline
There is probably a more direct/cleaner way to do what I am trying to do, but it eludes me. Basically, I want to leverage my development components to conduct live run-time testing rather than relying strictly on a Moq strategy; so why not just launch the app container and test against it? I would prefer to keep my current strategy, but I am open to other implementations of the general idea.