Christopher Kim, Carson Chow

Understanding the recurrent dynamics of cortical circuits engaged in complex tasks is one of the central questions in computational neuroscience. Most recent studies train the output of recurrent models to perform cognitive or motor tasks and investigate whether the recurrent dynamics emerging from task-driven learning can explain neuronal data. However, the possible range of recurrent dynamics that can be realized within a recurrent model after learning, particularly in a spiking neural network, is not well understood. In this study, we investigate the capability of spiking networks to learn recurrent dynamics and characterize their learning capacity in terms of network size, intrinsic synaptic decay time, and target decay time. We find that, by modifying recurrent synaptic weights, spiking networks can generate arbitrarily complex recurrent patterns if (1) the target patterns can be produced self-consistently, (2) the synaptic dynamics are fast enough to track the targets, and (3) the number of neurons in the network is large enough for noisy postsynaptic currents to approximate the targets. We examine the spiking network's learning capacity analytically and corroborate the predictions by training spiking networks to learn arbitrary patterns and in vivo cortical activity. Furthermore, we show that a trained network can operate in the balanced state if the total excitatory and inhibitory synaptic weights onto each neuron are constrained to preserve the balanced network structure. Under such synaptic constraints, the trained network generates spikes at the desired rate with large trial-to-trial variability and exhibits the paradoxical features of an inhibition-stabilized network. These results show that spiking neural networks with fast synapses and a large number of neurons can generate arbitrarily complex dynamics. When learning is not optimal, our findings suggest potential sources of learning errors. Moreover, networks can be trained in a dynamic regime relevant to cortical circuits.
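
As a rough illustration of the kind of procedure summarized above (adjusting recurrent weights so that each neuron's synaptic current tracks a prescribed target), the sketch below uses a recursive least-squares (FORCE-style) update on a small leaky integrate-and-fire network. The update rule, parameter values, shared correlation matrix, and constant bias current are illustrative assumptions, not the authors' exact method, and the sketch omits the excitatory/inhibitory weight constraints discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- network and simulation parameters (illustrative values) ---
N = 100            # number of neurons
dt = 1e-4          # time step (s)
tau_m = 10e-3      # membrane time constant (s)
tau_s = 20e-3      # synaptic decay time (s): fast relative to the targets
T = 2.0            # trial duration (s)
steps = int(T / dt)
update_every = 20  # apply an RLS update every 2 ms
I_bias = 1.5       # constant drive so that neurons keep spiking

# --- target synaptic currents: slow sinusoids with random phases ---
t = np.arange(steps) * dt
phase = rng.uniform(0.0, 2.0 * np.pi, N)
targets = 0.5 * np.sin(2.0 * np.pi * 2.0 * t[None, :] + phase[:, None])

# --- recurrent weights (learned) and shared RLS inverse-correlation matrix ---
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
P = np.eye(N)          # initial inverse-correlation matrix

v = np.zeros(N)        # membrane potentials
r = np.zeros(N)        # exponentially filtered spike trains
v_th, v_reset = 1.0, 0.0
errs = []

for k in range(steps):
    u = W @ r                                  # recurrent synaptic currents
    v += dt / tau_m * (-v + u + I_bias)        # leaky integration
    spiked = v >= v_th
    v[spiked] = v_reset
    r += dt * (-r / tau_s)                     # synaptic filtering of spikes
    r[spiked] += 1.0 / tau_s

    if k % update_every == 0:
        e = u - targets[:, k]                  # per-neuron tracking error
        Pr = P @ r
        gain = Pr / (1.0 + r @ Pr)
        P -= np.outer(gain, Pr)                # RLS update of P
        W -= np.outer(e, gain)                 # RLS update of every weight row
        errs.append(np.mean(np.abs(e)))

print(f"mean |error| early: {np.mean(errs[:50]):.3f}, late: {np.mean(errs[-50:]):.3f}")
```

The printed early-versus-late error gives a quick check that the filtered synaptic currents move toward the targets over the trial; in the regime described in the abstract, tracking is expected to improve with larger networks and with synaptic decay times fast enough to follow the target dynamics.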