We prove a general existence result for stochastic optimal control in discrete time, where controls take values in conditional metric spaces and depend on the current state and the information generated by past decisions through the evolution of a recursively defined forward process. The generality of the problem lies beyond the scope of standard techniques in stochastic control theory such as random sets, normal integrands and measurable selection theory. The main novelty is a formalization in conditional metric spaces and the use of techniques from conditional analysis. We illustrate the existence result with several examples, including wealth-dependent utility maximization under risk constraints with bounded and unbounded wealth-dependent control sets, utility maximization with a measurable dimension, and dynamic risk sharing. Finally, we discuss how conditional analysis relates to the theory of random sets.