Container runs out of memory or other resources

We run our application through Vagrant, using Docker as the provider. Some time after manually starting the Docker daemon (sudo dockerd), sometimes as little as a few minutes and sometimes as long as a few hours later, a variety of errors begin to occur. Restarting my computer sometimes clears the problem for a short while, but it always returns; restarting the Docker service, running vagrant destroy, or completely removing and recreating the container does not fix it. Below are copies of my terminal output, where I started dockerd immediately after logging in, then ran vagrant up and vagrant ssh, and was then unable to start our application or do anything else from inside the container.
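For reference, the errors below ("unable to create new native thread", "fork: retry: No child processes") usually point at a process or thread limit rather than at RAM. Here is a minimal sketch of the host-side checks that seem relevant (the systemd property query assumes task accounting is enabled; I have not confirmed which limit applies here):

$ ulimit -u                               # per-user process limit for this shell
$ cat /proc/sys/kernel/pid_max            # system-wide PID limit
$ cat /proc/sys/kernel/threads-max        # system-wide thread limit
$ systemctl show -p TasksMax user.slice   # systemd task limit on the login session, if accounting is on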

dockerd output
$ sudo dockerd
INFO[2018-01-10T10:13:48.976566572-07:00] libcontainerd: started new docker-containerd process  pid=1387
INFO[0000] starting containerd                           module=containerd revision=89623f28b87a6004d4b785663257362d1658a729 version=v1.0.0
INFO[0000] setting subreaper...                          module=containerd
INFO[0000] changing OOM score to -500                    module=containerd
INFO[0000] loading plugin "io.containerd.content.v1.content"...  module=containerd type=io.containerd.content.v1
INFO[0000] loading plugin "io.containerd.snapshotter.v1.btrfs"...  module=containerd type=io.containerd.snapshotter.v1
WARN[0000] failed to load plugin io.containerd.snapshotter.v1.btrfs  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module=containerd
INFO[0000] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  module=containerd type=io.containerd.snapshotter.v1
INFO[0000] loading plugin "io.containerd.metadata.v1.bolt"...  module=containerd type=io.containerd.metadata.v1
WARN[0000] could not use snapshotter btrfs in metadata plugin  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module="containerd/io.containerd.metadata.v1.bolt"
INFO[0000] loading plugin "io.containerd.differ.v1.walking"...  module=containerd type=io.containerd.differ.v1
INFO[0000] loading plugin "io.containerd.gc.v1.scheduler"...  module=containerd type=io.containerd.gc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.containers"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.content"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.diff"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.events"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.healthcheck"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.images"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.leases"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.namespaces"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.snapshots"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.monitor.v1.cgroups"...  module=containerd type=io.containerd.monitor.v1
INFO[0000] loading plugin "io.containerd.runtime.v1.linux"...  module=containerd type=io.containerd.runtime.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.tasks"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.version"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.introspection"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] serving...                                    address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
INFO[0000] serving...                                    address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
INFO[0000] containerd successfully booted in 0.230334s   module=containerd
INFO[2018-01-10T10:13:49.989310415-07:00] [graphdriver] using prior storage driver: overlay2
INFO[2018-01-10T10:13:55.113622738-07:00] Graph migration to content-addressability took 0.00 seconds
WARN[2018-01-10T10:13:55.114869326-07:00] Your kernel does not support cgroup rt period
WARN[2018-01-10T10:13:55.115001693-07:00] Your kernel does not support cgroup rt runtime
INFO[2018-01-10T10:13:55.117439113-07:00] Loading containers: start.
INFO[2018-01-10T10:13:56.905797312-07:00] ignoring event                                module=libcontainerd namespace=moby topic=/containers/delete type="*events.ContainerDelete"
INFO[2018-01-10T10:14:05.674152149-07:00] Removing stale sandbox 569a1dfbc65b57dba252376c58e64cf88d81df98fde17858d1d87d8e9a05c36f (b3d25849d652f7e19cb39d90e843b779d8b1b038dc6a330d99ebba6130f33e73) 
WARN[2018-01-10T10:14:05.837864861-07:00] Error (Unable to complete atomic operation, key modified) deleting object [endpoint f1d75c9f0b7b2356fa434dadf7f6a0725c099d08bbf9bde3ad5f94ef6e726f57 e3f2d76bc9c6f172f32bc742cbf6c796d68f7c0c583b1696b38df75e2fb51dd1], retrying....
INFO[2018-01-10T10:14:06.387910460-07:00] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
INFO[2018-01-10T10:14:07.144265750-07:00] Loading containers: done.
WARN[2018-01-10T10:14:07.305161956-07:00] Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled 
INFO[2018-01-10T10:14:08.295549770-07:00] Docker daemon                                 commit=486a48d270 graphdriver(s)=overlay2 version=17.12.0-ce
INFO[2018-01-10T10:14:08.468428520-07:00] Daemon has completed initialization
INFO[2018-01-10T10:14:08.574466863-07:00] API listen on /var/run/docker.sock
INFO[2018-01-10T10:14:24.526045046-07:00] ignoring event                                module=libcontainerd namespace=moby topic=/containers/create type="*events.ContainerCreate"
INFO[0035] shim docker-containerd-shim started           address="/containerd-shim/moby/b3d25849d652f7e19cb39d90e843b779d8b1b038dc6a330d99ebba6130f33e73/shim.sock" debug=false module="containerd/tasks" pid=2708
WARN[2018-01-10T10:14:26.480781023-07:00] unknown container                             container=b3d25849d652f7e19cb39d90e843b779d8b1b038dc6a330d99ebba6130f33e73 module=libcontainerd namespace=plugins.moby
WARN[2018-01-10T10:14:27.008401930-07:00] unknown container                             container=b3d25849d652f7e19cb39d90e843b779d8b1b038dc6a330d99ebba6130f33e73 module=libcontainerd namespace=plugins.moby
Vagrant/Application output
$ vagrant up
Bringing machine 'default' up with 'docker' provider...
==> default: Vagrant has noticed that the synced folder definitions have changed.
==> default: With Docker, these synced folder changes won't take effect until you
==> default: destroy the container and recreate it.
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 172.17.0.2:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection refused. Retrying...
    default: Warning: Connection refused. Retrying...
    default: Warning: Connection refused. Retrying...
^C==> default: Waiting for cleanup before exiting...
Vagrant exited after cleanup due to external interrupt.
$ vagrant up
Bringing machine 'default' up with 'docker' provider...
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
$ vagrant ssh
Last login: Wed Jan 10 12:11:55 2018 from 172.17.0.1
Have a lot of fun...
vagrant@K1902 ~/kilimanjaro
$ sbt
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
Error occurred during initialization of VM
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:717)
	at java.lang.ref.Finalizer.<clinit>(Finalizer.java:226)

vagrant@K1902 ~/kilimanjaro
$ sbt
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
Error occurred during initialization of VM
java.lang.OutOfMemoryError: unable to create new native thread
vagrant@K1902 ~/kilimanjaro
$ sbt
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
sbt appears to be exiting abnormally.
  The log file for this session is at /tmp/sbt7289967183575680689.log
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:717)
	at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1367)
	at sbt.EvaluateSettings.submit(INode.scala:74)
	at sbt.EvaluateSettings.sbt$EvaluateSettings$$submitEvaluate(INode.scala:69)
	at sbt.EvaluateSettings$INode.schedule(INode.scala:127)
	at sbt.EvaluateSettings$INode.register(INode.scala:119)
	at sbt.EvaluateSettings$INode.registerIfNew(INode.scala:113)
	at sbt.EvaluateSettings$$anonfun$run$2.apply(INode.scala:54)
	at sbt.EvaluateSettings$$anonfun$run$2.apply(INode.scala:54)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at sbt.EvaluateSettings.run(INode.scala:54)
	at sbt.Init$class.sbt$Init$$applyInits(Settings.scala:209)
	at sbt.Init$class.make(Settings.scala:148)
	at sbt.Def$.make(Def.scala:10)
	at sbt.Load$$anonfun$8.apply(Load.scala:161)
	at sbt.Load$$anonfun$8.apply(Load.scala:156)
	at sbt.Load$.timed(Load.scala:1025)
	at sbt.Load$.apply(Load.scala:156)
	at sbt.Load$.buildPluginDefinition(Load.scala:886)
	at sbt.Load$.buildPlugins(Load.scala:852)
	at sbt.Load$.plugins(Load.scala:840)
	at sbt.Load$$anonfun$loadUnit$1$$anonfun$34.apply(Load.scala:465)
	at sbt.Load$$anonfun$loadUnit$1$$anonfun$34.apply(Load.scala:465)
	at sbt.Load$.timed(Load.scala:1025)
	at sbt.Load$$anonfun$loadUnit$1.apply(Load.scala:464)
	at sbt.Load$$anonfun$loadUnit$1.apply(Load.scala:459)
	at sbt.Load$.timed(Load.scala:1025)
	at sbt.Load$.loadUnit(Load.scala:459)
	at sbt.Load$$anonfun$25$$anonfun$apply$14.apply(Load.scala:311)
	at sbt.Load$$anonfun$25$$anonfun$apply$14.apply(Load.scala:310)
	at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:91)
	at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:90)
	at sbt.BuildLoader.apply(BuildLoader.scala:140)
	at sbt.Load$.loadAll(Load.scala:365)
	at sbt.Load$.loadURI(Load.scala:320)
	at sbt.Load$.load(Load.scala:316)
	at sbt.Load$.load(Load.scala:305)
	at sbt.Load$$anonfun$4.apply(Load.scala:146)
	at sbt.Load$$anonfun$4.apply(Load.scala:146)
	at sbt.Load$.timed(Load.scala:1025)
	at sbt.Load$.apply(Load.scala:146)
	at sbt.Load$.defaultLoad(Load.scala:39)
	at sbt.BuiltinCommands$.liftedTree1$1(Main.scala:503)
	at sbt.BuiltinCommands$.doLoadProject(Main.scala:503)
	at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:495)
	at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:495)
	at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:59)
	at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:59)
	at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:61)
	at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:61)
	at sbt.Command$.process(Command.scala:93)
	at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:96)
	at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:96)
	at sbt.State$$anon$1.runCmd$1(State.scala:183)
	at sbt.State$$anon$1.process(State.scala:187)
	at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:96)
	at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:96)
	at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
	at sbt.MainLoop$.next(MainLoop.scala:96)
	at sbt.MainLoop$.run(MainLoop.scala:89)
	at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:68)
	at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:63)
	at sbt.Using.apply(Using.scala:24)
	at sbt.MainLoop$.runWithNewLog(MainLoop.scala:63)
	at sbt.MainLoop$.runAndClearLast(MainLoop.scala:46)
	at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:30)
	at sbt.MainLoop$.runLogged(MainLoop.scala:22)
	at sbt.StandardMain$.runManaged(Main.scala:61)
	at sbt.xMain.run(Main.scala:35)
	at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
	at xsbt.boot.Launch$.withContextLoader(Launch.scala:128)
	at xsbt.boot.Launch$.run(Launch.scala:109)
	at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35)
	at xsbt.boot.Launch$.launch(Launch.scala:117)
	at xsbt.boot.Launch$.apply(Launch.scala:18)
	at xsbt.boot.Boot$.runImpl(Boot.scala:41)
	at xsbt.boot.Boot$.main(Boot.scala:17)
	at xsbt.boot.Boot.main(Boot.scala)
Error during sbt execution: java.lang.OutOfMemoryError: unable to create new native thread
vagrant@K1902 ~/kilimanjaro
$ ls
-bash: fork: retry: No child processes
-bash: fork: retry: No child processes
-bash: fork: retry: No child processes
-bash: fork: retry: No child processes
-bash: fork: Resource temporarily unavailable
vagrant@K1902 ~/kilimanjaro
$ free
-bash: fork: retry: No child processes
             total       used       free     shared    buffers     cached
Mem:      12187076    8194380    3992696     628072     113140    2870820
-/+ buffers/cache:    5210420    6976656
Swap:      6097600          0    6097600
vagrant@K1902 ~/kilimanjaro
$
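Note that free reports several gigabytes free and the swap completely untouched, so this does not look like actual memory exhaustion; the "unable to create new native thread" and fork failures point at a process/thread limit instead. A sketch of how one might inspect the container's PID limit and current usage (the cgroup path is an assumption for a cgroup-v1 layout, and <container-id> is a placeholder):

$ docker inspect --format '{{.HostConfig.PidsLimit}}' <container-id>    # 0 means no per-container limit was set
$ cat /sys/fs/cgroup/pids/docker/<container-id>/pids.max                # effective limit from the cgroup hierarchy
$ cat /sys/fs/cgroup/pids/docker/<container-id>/pids.current            # tasks currently charged to the container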

OS: Antergos Linux w/ kernel 4.14.12-1
Docker version: 1:17.12.0-1
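If one of these limits does turn out to be the cause, two workaround sketches, neither verified here: start the daemon in a scope with the systemd task limit lifted, or pass an explicit PID limit when the container is created (--pids-limit has existed since Docker 1.11; -1 means unlimited):

$ sudo systemd-run --scope -p TasksMax=infinity dockerd    # lift the task limit for the daemon and its containers
$ docker run --pids-limit=-1 <usual create arguments>      # or raise the limit per container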